Test Report: Docker_Linux_crio 21724

cdde98f5260d5cfb20fef0dee46a24863d2037a7:2025-10-13:41893

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.35
35 TestAddons/parallel/Registry 13.45
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 146.39
38 TestAddons/parallel/InspektorGadget 5.28
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 44.86
42 TestAddons/parallel/Headlamp 2.59
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 8.13
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.24
98 TestFunctional/parallel/ServiceCmdConnect 602.99
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.65
118 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 2.4
197 TestJSONOutput/unpause/Command 1.58
275 TestPause/serial/Pause 6.2
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.59
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.12
310 TestStartStop/group/old-k8s-version/serial/Pause 6.85
318 TestStartStop/group/no-preload/serial/Pause 6.34
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.68
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.41
338 TestStartStop/group/newest-cni/serial/Pause 6.35
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.32
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.62
362 TestStartStop/group/embed-certs/serial/Pause 6.39
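
Most of the addon failures below share one signature: "addons disable" exits 11 with MK_ADDON_DISABLE_PAUSED after "sudo runc list -f json" fails inside the node. To chase down a single entry from this table locally, a sketch along the following lines should work; the exact flags this CI job passes to go test are an assumption, not taken from the report:

    # Hypothetical local re-run of one failed test from this report.
    # Assumes a minikube source checkout with out/minikube-linux-amd64 already built;
    # -minikube-start-args is the flag name used by minikube's integration harness.
    go test ./test/integration -v -timeout 60m \
      -run 'TestAddons/parallel/Registry' \
      -minikube-start-args='--driver=docker --container-runtime=crio'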
TestAddons/serial/Volcano (0.35s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable volcano --alsologtostderr -v=1: exit status 11 (351.722111ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:02.190757  240517 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:02.190907  240517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:02.190918  240517 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:02.190922  240517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:02.191167  240517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:02.191494  240517 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:02.191919  240517 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:02.191938  240517 addons.go:606] checking whether the cluster is paused
	I1013 21:21:02.192032  240517 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:02.192045  240517 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:02.192531  240517 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:02.210969  240517 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:02.211051  240517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:02.228836  240517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:02.325875  240517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:02.325962  240517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:02.356506  240517 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:02.356535  240517 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:02.356540  240517 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:02.356546  240517 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:02.356550  240517 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:02.356556  240517 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:02.356560  240517 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:02.356564  240517 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:02.356569  240517 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:02.356586  240517 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:02.356592  240517 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:02.356596  240517 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:02.356600  240517 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:02.356604  240517 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:02.356608  240517 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:02.356619  240517 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:02.356626  240517 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:02.356637  240517 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:02.356642  240517 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:02.356655  240517 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:02.356665  240517 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:02.356669  240517 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:02.356675  240517 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:02.356677  240517 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:02.356680  240517 cri.go:89] found id: ""
	I1013 21:21:02.356728  240517 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:02.371470  240517 out.go:203] 
	W1013 21:21:02.373021  240517 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:02.373039  240517 out.go:285] * 
	* 
	W1013 21:21:02.485082  240517 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:02.486744  240517 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.35s)
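
The disable command never reaches the addon: minikube first probes whether the cluster is paused by listing kube-system containers with crictl and then asking runc for its container list, and that second step fails because /run/runc does not exist. That is consistent with this crio node defaulting to an OCI runtime other than runc (crun, for example), in which case runc has no state directory at all. A minimal sketch for confirming that by hand; the crio config location is an assumption:

    # Both probe commands below appear verbatim in the log above
    minikube -p addons-143775 ssh -- sudo crictl ps -a --quiet \
      --label io.kubernetes.pod.namespace=kube-system
    minikube -p addons-143775 ssh -- sudo runc list -f json
    # expected: level=error msg="open /run/runc: no such file or directory"

    # Which low-level runtime is crio actually configured with?
    # (path is an assumption; it may live under /etc/crio/crio.conf.d/ instead)
    minikube -p addons-143775 ssh -- sudo grep -r default_runtime /etc/crio/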

TestAddons/parallel/Registry (13.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.070171ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002918377s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003437793s
addons_test.go:392: (dbg) Run:  kubectl --context addons-143775 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-143775 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-143775 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.968311419s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable registry --alsologtostderr -v=1: exit status 11 (267.49065ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:23.541470  243109 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:23.541780  243109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:23.541792  243109 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:23.541796  243109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:23.542038  243109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:23.542340  243109 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:23.542689  243109 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:23.542704  243109 addons.go:606] checking whether the cluster is paused
	I1013 21:21:23.542783  243109 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:23.542797  243109 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:23.543193  243109 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:23.564836  243109 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:23.564908  243109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:23.587612  243109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:23.688203  243109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:23.688314  243109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:23.728112  243109 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:23.728143  243109 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:23.728149  243109 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:23.728154  243109 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:23.728158  243109 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:23.728163  243109 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:23.728167  243109 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:23.728171  243109 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:23.728183  243109 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:23.728191  243109 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:23.728195  243109 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:23.728199  243109 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:23.728204  243109 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:23.728208  243109 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:23.728212  243109 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:23.728226  243109 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:23.728233  243109 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:23.728240  243109 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:23.728244  243109 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:23.728248  243109 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:23.728252  243109 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:23.728256  243109 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:23.728261  243109 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:23.728272  243109 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:23.728279  243109 cri.go:89] found id: ""
	I1013 21:21:23.728328  243109 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:23.745210  243109 out.go:203] 
	W1013 21:21:23.746386  243109 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:23.746401  243109 out.go:285] * 
	* 
	W1013 21:21:23.749784  243109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:23.751011  243109 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.45s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.326219ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-143775
addons_test.go:332: (dbg) Run:  kubectl --context addons-143775 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (273.985943ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:15.848854  241861 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:15.849239  241861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.849257  241861 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:15.849263  241861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.849569  241861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:15.850099  241861 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:15.850631  241861 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.850655  241861 addons.go:606] checking whether the cluster is paused
	I1013 21:21:15.850800  241861 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.850819  241861 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:15.851402  241861 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:15.871096  241861 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:15.871173  241861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:15.892654  241861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:16.000828  241861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:16.000896  241861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:16.039240  241861 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:16.039267  241861 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:16.039273  241861 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:16.039278  241861 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:16.039284  241861 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:16.039290  241861 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:16.039296  241861 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:16.039301  241861 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:16.039306  241861 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:16.039319  241861 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:16.039324  241861 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:16.039329  241861 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:16.039334  241861 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:16.039339  241861 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:16.039345  241861 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:16.039359  241861 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:16.039365  241861 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:16.039372  241861 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:16.039378  241861 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:16.039383  241861 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:16.039388  241861 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:16.039401  241861 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:16.039407  241861 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:16.039426  241861 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:16.039431  241861 cri.go:89] found id: ""
	I1013 21:21:16.039493  241861 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:16.057035  241861 out.go:203] 
	W1013 21:21:16.058555  241861 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:16.058585  241861 out.go:285] * 
	* 
	W1013 21:21:16.061803  241861 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:16.063212  241861 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (146.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-143775 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-143775 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-143775 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [946de537-a733-4d4d-a412-eda4e4cde55a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [946de537-a733-4d4d-a412-eda4e4cde55a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003786199s
I1013 21:21:24.040421  230929 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.678949186s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-143775 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
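
Exit status 28 is curl's code for an operation timeout, so the probe simply got no answer from the ingress controller within the test window. A hedged sketch for narrowing that down by hand; these are standard kubectl/minikube invocations, not commands taken from this log:

    # Re-issue the probe with a short explicit timeout instead of waiting minutes
    minikube -p addons-143775 ssh -- curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/

    # Is the controller actually serving, and did the Ingress get an address?
    kubectl --context addons-143775 -n ingress-nginx get pods,svc
    kubectl --context addons-143775 get ingress -o wide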
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-143775
helpers_test.go:243: (dbg) docker inspect addons-143775:

-- stdout --
	[
	    {
	        "Id": "541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1",
	        "Created": "2025-10-13T21:18:52.880794569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:18:52.91981774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/hosts",
	        "LogPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1-json.log",
	        "Name": "/addons-143775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-143775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-143775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1",
	                "LowerDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-143775",
	                "Source": "/var/lib/docker/volumes/addons-143775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-143775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-143775",
	                "name.minikube.sigs.k8s.io": "addons-143775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99ed5aa952ed99b68aef33c633333fdfdd4632dee17b0907a84d2df70a94220e",
	            "SandboxKey": "/var/run/docker/netns/99ed5aa952ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-143775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ab:16:a2:ad:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6cb13af425017a4154ca14bd547d1a6dd94adbcf73f90e6de0d88aea7818eb1",
	                    "EndpointID": "854498c27153985a651286f2f972df7bc39ab2aba4fa5217184799f1a30e7ce5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-143775",
	                        "541f9fcc19e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
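
Only a few fields of that inspect dump bear on the failure: the container is running and not paused, and its ports publish to node IP 192.168.49.2. A small sketch for extracting just those fields, assuming jq is available on the host:

    docker inspect addons-143775 | jq '{
      state: .[0].State.Status,
      paused: .[0].State.Paused,
      ip: .[0].NetworkSettings.Networks["addons-143775"].IPAddress,
      ports: .[0].NetworkSettings.Ports
    }'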
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-143775 -n addons-143775
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-143775 logs -n 25: (1.232079771s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-784602 --alsologtostderr --binary-mirror http://127.0.0.1:46779 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-784602 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ -p binary-mirror-784602                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-784602 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ addons  │ disable dashboard -p addons-143775                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-143775                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ start   │ -p addons-143775 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-143775 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ enable headlamp -p addons-143775 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-143775                                                                                                                                                                                                                                                                                                                                                                                           │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-143775 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ ip      │ addons-143775 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-143775 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ ssh     │ addons-143775 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ ssh     │ addons-143775 ssh cat /opt/local-path-provisioner/pvc-e6be8790-906c-4030-973a-777621257e3a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-143775 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:22 UTC │                     │
	│ addons  │ addons-143775 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:22 UTC │                     │
	│ ip      │ addons-143775 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-143775        │ jenkins │ v1.37.0 │ 13 Oct 25 21:23 UTC │ 13 Oct 25 21:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:18:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:18:29.147526  232267 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:18:29.147770  232267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:29.147778  232267 out.go:374] Setting ErrFile to fd 2...
	I1013 21:18:29.147782  232267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:29.147987  232267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:18:29.148567  232267 out.go:368] Setting JSON to false
	I1013 21:18:29.149409  232267 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3657,"bootTime":1760386652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:18:29.149509  232267 start.go:141] virtualization: kvm guest
	I1013 21:18:29.151721  232267 out.go:179] * [addons-143775] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:18:29.153116  232267 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:18:29.153156  232267 notify.go:220] Checking for updates...
	I1013 21:18:29.156011  232267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:18:29.157458  232267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:18:29.158915  232267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:18:29.160458  232267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:18:29.162155  232267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:18:29.163972  232267 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:18:29.187801  232267 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:18:29.187888  232267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:29.244400  232267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:29.233917433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:29.244506  232267 docker.go:318] overlay module found
	I1013 21:18:29.246528  232267 out.go:179] * Using the docker driver based on user configuration
	I1013 21:18:29.247881  232267 start.go:305] selected driver: docker
	I1013 21:18:29.247896  232267 start.go:925] validating driver "docker" against <nil>
	I1013 21:18:29.247911  232267 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:18:29.248484  232267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:29.308417  232267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:29.298843777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:29.308608  232267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:18:29.308808  232267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:18:29.310786  232267 out.go:179] * Using Docker driver with root privileges
	I1013 21:18:29.312109  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:18:29.312175  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:18:29.312186  232267 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 21:18:29.312269  232267 start.go:349] cluster config:
	{Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:29.313557  232267 out.go:179] * Starting "addons-143775" primary control-plane node in "addons-143775" cluster
	I1013 21:18:29.314888  232267 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:18:29.316147  232267 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 21:18:29.317190  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:29.317232  232267 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:18:29.317244  232267 cache.go:58] Caching tarball of preloaded images
	I1013 21:18:29.317320  232267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 21:18:29.317342  232267 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:18:29.317350  232267 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:18:29.317688  232267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json ...
	I1013 21:18:29.317713  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json: {Name:mk86885072ff6639c3332c248fc6f7264e47968c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:29.333580  232267 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 21:18:29.333720  232267 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 21:18:29.333736  232267 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1013 21:18:29.333741  232267 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1013 21:18:29.333749  232267 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1013 21:18:29.333753  232267 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1013 21:18:42.074621  232267 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1013 21:18:42.074662  232267 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:18:42.074715  232267 start.go:360] acquireMachinesLock for addons-143775: {Name:mk6f74072f84c857b4a9fd47fd2ff103ee669eed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:18:42.074840  232267 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "addons-143775"
	I1013 21:18:42.074867  232267 start.go:93] Provisioning new machine with config: &{Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:18:42.074952  232267 start.go:125] createHost starting for "" (driver="docker")
	I1013 21:18:42.076987  232267 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 21:18:42.077257  232267 start.go:159] libmachine.API.Create for "addons-143775" (driver="docker")
	I1013 21:18:42.077292  232267 client.go:168] LocalClient.Create starting
	I1013 21:18:42.077398  232267 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 21:18:42.169983  232267 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 21:18:42.339369  232267 cli_runner.go:164] Run: docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 21:18:42.356062  232267 cli_runner.go:211] docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 21:18:42.356149  232267 network_create.go:284] running [docker network inspect addons-143775] to gather additional debugging logs...
	I1013 21:18:42.356176  232267 cli_runner.go:164] Run: docker network inspect addons-143775
	W1013 21:18:42.373346  232267 cli_runner.go:211] docker network inspect addons-143775 returned with exit code 1
	I1013 21:18:42.373379  232267 network_create.go:287] error running [docker network inspect addons-143775]: docker network inspect addons-143775: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-143775 not found
	I1013 21:18:42.373393  232267 network_create.go:289] output of [docker network inspect addons-143775]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-143775 not found
	
	** /stderr **
	I1013 21:18:42.373479  232267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:18:42.392347  232267 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bf6dc0}
	I1013 21:18:42.392397  232267 network_create.go:124] attempt to create docker network addons-143775 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 21:18:42.392452  232267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-143775 addons-143775
	I1013 21:18:42.451093  232267 network_create.go:108] docker network addons-143775 192.168.49.0/24 created
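For reference, the network the log reports here can be checked by hand; a minimal sketch, assuming the network name addons-143775 and the subnet/gateway values from the preceding lines:

	docker network inspect addons-143775 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# per the log above, this should print: 192.168.49.0/24 192.168.49.1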
	I1013 21:18:42.451132  232267 kic.go:121] calculated static IP "192.168.49.2" for the "addons-143775" container
	I1013 21:18:42.451209  232267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 21:18:42.467750  232267 cli_runner.go:164] Run: docker volume create addons-143775 --label name.minikube.sigs.k8s.io=addons-143775 --label created_by.minikube.sigs.k8s.io=true
	I1013 21:18:42.488372  232267 oci.go:103] Successfully created a docker volume addons-143775
	I1013 21:18:42.488448  232267 cli_runner.go:164] Run: docker run --rm --name addons-143775-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --entrypoint /usr/bin/test -v addons-143775:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 21:18:48.444317  232267 cli_runner.go:217] Completed: docker run --rm --name addons-143775-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --entrypoint /usr/bin/test -v addons-143775:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (5.95582857s)
	I1013 21:18:48.444346  232267 oci.go:107] Successfully prepared a docker volume addons-143775
	I1013 21:18:48.444397  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:48.444422  232267 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 21:18:48.444471  232267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-143775:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 21:18:52.806171  232267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-143775:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.361659169s)
	I1013 21:18:52.806208  232267 kic.go:203] duration metric: took 4.361780531s to extract preloaded images to volume ...
	W1013 21:18:52.806321  232267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 21:18:52.806379  232267 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 21:18:52.806439  232267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 21:18:52.865156  232267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-143775 --name addons-143775 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-143775 --network addons-143775 --ip 192.168.49.2 --volume addons-143775:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 21:18:53.169603  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Running}}
	I1013 21:18:53.188321  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.206791  232267 cli_runner.go:164] Run: docker exec addons-143775 stat /var/lib/dpkg/alternatives/iptables
	I1013 21:18:53.249460  232267 oci.go:144] the created container "addons-143775" has a running status.
	I1013 21:18:53.249498  232267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa...
	I1013 21:18:53.498970  232267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 21:18:53.527121  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.546199  232267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 21:18:53.546218  232267 kic_runner.go:114] Args: [docker exec --privileged addons-143775 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 21:18:53.592450  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.611539  232267 machine.go:93] provisionDockerMachine start ...
	I1013 21:18:53.611634  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.631081  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.631436  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.631455  232267 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:18:53.766806  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-143775
	
	I1013 21:18:53.766839  232267 ubuntu.go:182] provisioning hostname "addons-143775"
	I1013 21:18:53.766908  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.784799  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.785107  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.785127  232267 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-143775 && echo "addons-143775" | sudo tee /etc/hostname
	I1013 21:18:53.931488  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-143775
	
	I1013 21:18:53.931573  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.949462  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.949686  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.949704  232267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-143775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-143775/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-143775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:18:54.085163  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:18:54.085196  232267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 21:18:54.085216  232267 ubuntu.go:190] setting up certificates
	I1013 21:18:54.085231  232267 provision.go:84] configureAuth start
	I1013 21:18:54.085292  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:54.102149  232267 provision.go:143] copyHostCerts
	I1013 21:18:54.102225  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 21:18:54.102333  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 21:18:54.102408  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 21:18:54.102470  232267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.addons-143775 san=[127.0.0.1 192.168.49.2 addons-143775 localhost minikube]
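The san=[...] list above can be verified against the generated certificate once it is written to disk; a minimal sketch, assuming the server.pem path shown in the surrounding provision lines:

	openssl x509 -in /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected SANs, per the san=[...] list above: 127.0.0.1, 192.168.49.2, addons-143775, localhost, minikube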
	I1013 21:18:54.690134  232267 provision.go:177] copyRemoteCerts
	I1013 21:18:54.690198  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:18:54.690235  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:54.707909  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
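The SSH session this line describes can be reproduced manually with the same key and forwarded port; a minimal sketch (port 32768 is the host side of the container's published 22/tcp, per the lines above):

	ssh -i /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa -p 32768 docker@127.0.0.1 hostname
	# should print: addons-143775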
	I1013 21:18:54.805379  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 21:18:54.824930  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:18:54.842129  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:18:54.859907  232267 provision.go:87] duration metric: took 774.657382ms to configureAuth
	I1013 21:18:54.859937  232267 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:18:54.860132  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:18:54.860234  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:54.878285  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:54.878568  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:54.878598  232267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:18:55.129338  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:18:55.129365  232267 machine.go:96] duration metric: took 1.517805295s to provisionDockerMachine
	I1013 21:18:55.129376  232267 client.go:171] duration metric: took 13.05207559s to LocalClient.Create
	I1013 21:18:55.129399  232267 start.go:167] duration metric: took 13.05214319s to libmachine.API.Create "addons-143775"
	I1013 21:18:55.129409  232267 start.go:293] postStartSetup for "addons-143775" (driver="docker")
	I1013 21:18:55.129422  232267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:18:55.129495  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:18:55.129535  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.147012  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.246440  232267 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:18:55.250304  232267 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:18:55.250336  232267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:18:55.250361  232267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 21:18:55.250428  232267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 21:18:55.250463  232267 start.go:296] duration metric: took 121.046279ms for postStartSetup
	I1013 21:18:55.250838  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:55.269349  232267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json ...
	I1013 21:18:55.269600  232267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:18:55.269644  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.287756  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.382649  232267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:18:55.387628  232267 start.go:128] duration metric: took 13.312657051s to createHost
	I1013 21:18:55.387663  232267 start.go:83] releasing machines lock for "addons-143775", held for 13.312808942s
	I1013 21:18:55.387741  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:55.405333  232267 ssh_runner.go:195] Run: cat /version.json
	I1013 21:18:55.405390  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.405430  232267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:18:55.405502  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.423559  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.424434  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.589342  232267 ssh_runner.go:195] Run: systemctl --version
	I1013 21:18:55.596018  232267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:18:55.631915  232267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:18:55.637016  232267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:18:55.637109  232267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:18:55.664188  232267 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 21:18:55.664211  232267 start.go:495] detecting cgroup driver to use...
	I1013 21:18:55.664242  232267 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 21:18:55.664287  232267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:18:55.681064  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:18:55.693317  232267 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:18:55.693377  232267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:18:55.711205  232267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:18:55.728254  232267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:18:55.809453  232267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:18:55.897555  232267 docker.go:234] disabling docker service ...
	I1013 21:18:55.897624  232267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:18:55.916736  232267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:18:55.929287  232267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:18:56.010190  232267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:18:56.090789  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:18:56.103736  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:18:56.118082  232267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:18:56.118220  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.128671  232267 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 21:18:56.128742  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.137825  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.146681  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.155657  232267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:18:56.163628  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.172429  232267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.185874  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
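Taken together, the sed edits above should leave the CRI-O drop-in config with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl settings they write; a minimal sketch of how one might confirm them (field names taken from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",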
	I1013 21:18:56.194984  232267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:18:56.202230  232267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 21:18:56.202302  232267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 21:18:56.215132  232267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:18:56.223049  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:56.301730  232267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:18:56.410733  232267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:18:56.410828  232267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:18:56.414875  232267 start.go:563] Will wait 60s for crictl version
	I1013 21:18:56.414936  232267 ssh_runner.go:195] Run: which crictl
	I1013 21:18:56.418378  232267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:18:56.443823  232267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:18:56.443945  232267 ssh_runner.go:195] Run: crio --version
	I1013 21:18:56.473167  232267 ssh_runner.go:195] Run: crio --version
	I1013 21:18:56.503689  232267 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:18:56.505042  232267 cli_runner.go:164] Run: docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:18:56.522101  232267 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 21:18:56.526661  232267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:18:56.537178  232267 kubeadm.go:883] updating cluster {Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:18:56.537306  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:56.537351  232267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:56.569234  232267 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:18:56.569256  232267 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:18:56.569323  232267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:56.595801  232267 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:18:56.595824  232267 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:18:56.595834  232267 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 21:18:56.595931  232267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-143775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:18:56.596010  232267 ssh_runner.go:195] Run: crio config
	I1013 21:18:56.644147  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:18:56.644171  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:18:56.644192  232267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:18:56.644215  232267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-143775 NodeName:addons-143775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:18:56.644343  232267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-143775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
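A config rendered like the one above can be sanity-checked before the cluster is brought up; a minimal sketch, assuming a kubeadm release that ships the `config validate` subcommand (v1.26+) and the kubeadm.yaml.new path the log writes a few lines below:

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new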
	
	I1013 21:18:56.644406  232267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:18:56.652843  232267 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:18:56.652917  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:18:56.660599  232267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 21:18:56.673353  232267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:18:56.689193  232267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1013 21:18:56.701807  232267 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:18:56.705528  232267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:18:56.715395  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:56.796490  232267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:18:56.819561  232267 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775 for IP: 192.168.49.2
	I1013 21:18:56.819590  232267 certs.go:195] generating shared ca certs ...
	I1013 21:18:56.819614  232267 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:56.819794  232267 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 21:18:57.001219  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt ...
	I1013 21:18:57.001251  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt: {Name:mk442cacdce4a6ea7cb8d8b5f3e18c2cb5b41a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.001465  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key ...
	I1013 21:18:57.001484  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key: {Name:mk947158710d75502a659246e73cfaf047ddaa6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.001606  232267 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 21:18:57.234866  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt ...
	I1013 21:18:57.234897  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt: {Name:mkd17add24a0f0553350cea006f2a6bd06f30ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.235127  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key ...
	I1013 21:18:57.235145  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key: {Name:mk7e152fded2964a3684c36e3bb4e18c4de83b1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.235938  232267 certs.go:257] generating profile certs ...
	I1013 21:18:57.236030  232267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key
	I1013 21:18:57.236046  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt with IP's: []
	I1013 21:18:57.491047  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt ...
	I1013 21:18:57.491083  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: {Name:mk47d1fb7df9c27d51cd11a02a41a0743b4626a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.492031  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key ...
	I1013 21:18:57.492058  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key: {Name:mkd68ac8b82a5a1fc9ca1e02e750598a36d378bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.492181  232267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f
	I1013 21:18:57.492209  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 21:18:57.779964  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f ...
	I1013 21:18:57.780021  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f: {Name:mk5474854b39339081ddc249e46c8872150290d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.780235  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f ...
	I1013 21:18:57.780251  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f: {Name:mk2e53c5b1006a3824e0dcf9d2a9e6f2dd7dc117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.781247  232267 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt
	I1013 21:18:57.781358  232267 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key
	I1013 21:18:57.781417  232267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key
	I1013 21:18:57.781438  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt with IP's: []
	I1013 21:18:58.073898  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt ...
	I1013 21:18:58.073934  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt: {Name:mk29cd6287fc0bc16dd0ea89fa692a61a7cf9e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:58.074809  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key ...
	I1013 21:18:58.074834  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key: {Name:mk8a49e4c36d349fc420cf9cb89bbc074ddbfed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:58.075059  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:18:58.075095  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:18:58.075118  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:18:58.075145  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 21:18:58.075871  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:18:58.094651  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:18:58.112187  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:18:58.129591  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:18:58.147216  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 21:18:58.164640  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 21:18:58.182581  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:18:58.200661  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:18:58.219100  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:18:58.239390  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
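	The apiserver certificate generated and copied above carries SANs for the in-cluster service IP (10.96.0.1), loopback, and the node IP (192.168.49.2). To confirm what actually landed in the cert on the node, a sketch ('-ext' requires OpenSSL 1.1.1+):
	
	    sudo openssl x509 -noout -ext subjectAltName \
	      -in /var/lib/minikube/certs/apiserver.crt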
	I1013 21:18:58.252743  232267 ssh_runner.go:195] Run: openssl version
	I1013 21:18:58.258948  232267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:18:58.270635  232267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.274847  232267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.274919  232267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.309458  232267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
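	The two openssl steps above implement OpenSSL's hashed-directory convention: '-hash' prints the certificate's subject hash, and the hash-named symlink lets anything scanning /etc/ssl/certs resolve the minikube CA. A sketch of the derivation:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints the subject hash (b5213941 here), hence the symlink /etc/ssl/certs/b5213941.0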
	I1013 21:18:58.318459  232267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:18:58.322202  232267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:18:58.322277  232267 kubeadm.go:400] StartCluster: {Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:58.322355  232267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:18:58.322400  232267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:18:58.350150  232267 cri.go:89] found id: ""
	I1013 21:18:58.350235  232267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:18:58.358583  232267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:18:58.366632  232267 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:18:58.366700  232267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:18:58.374505  232267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:18:58.374528  232267 kubeadm.go:157] found existing configuration files:
	
	I1013 21:18:58.374581  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:18:58.382287  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:18:58.382354  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:18:58.390247  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:18:58.397657  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:18:58.397720  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:18:58.404977  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:18:58.412539  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:18:58.412612  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:18:58.419857  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:18:58.427350  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:18:58.427410  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
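	The four grep/rm pairs above are one pattern applied per kubeconfig file: if the file does not point at https://control-plane.minikube.internal:8443 (including the file-missing case on a first start, as here), it is removed so kubeadm regenerates it. Condensed, as a sketch of the same logic:
	
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' \
	        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done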
	I1013 21:18:58.435923  232267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:18:58.475126  232267 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:18:58.475233  232267 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:18:58.497722  232267 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:18:58.497815  232267 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 21:18:58.497909  232267 kubeadm.go:318] OS: Linux
	I1013 21:18:58.498030  232267 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:18:58.498132  232267 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:18:58.498195  232267 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:18:58.498256  232267 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:18:58.498326  232267 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:18:58.498389  232267 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:18:58.498449  232267 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:18:58.498505  232267 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 21:18:58.555080  232267 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:18:58.555232  232267 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:18:58.555335  232267 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:18:58.563650  232267 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:18:58.566746  232267 out.go:252]   - Generating certificates and keys ...
	I1013 21:18:58.566858  232267 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:18:58.566949  232267 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:18:58.883483  232267 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:18:59.078252  232267 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:18:59.368868  232267 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:18:59.557505  232267 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:18:59.848183  232267 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:18:59.848295  232267 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-143775 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 21:18:59.962229  232267 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:18:59.962395  232267 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-143775 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 21:19:00.153490  232267 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:19:00.459371  232267 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:19:00.647354  232267 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:19:00.647420  232267 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:19:00.809640  232267 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:19:01.006072  232267 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:19:01.299669  232267 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:19:01.368197  232267 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:19:01.436233  232267 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:19:01.436977  232267 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:19:01.441684  232267 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:19:01.443941  232267 out.go:252]   - Booting up control plane ...
	I1013 21:19:01.444093  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:19:01.444252  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:19:01.444358  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:19:01.461873  232267 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:19:01.461973  232267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:19:01.469052  232267 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:19:01.470086  232267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:19:01.470157  232267 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:19:01.569452  232267 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:19:01.569631  232267 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:19:02.570459  232267 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000955025s
	I1013 21:19:02.574009  232267 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:19:02.574154  232267 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 21:19:02.574266  232267 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:19:02.574405  232267 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:19:04.539550  232267 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.965534565s
	I1013 21:19:05.177437  232267 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.603531217s
	I1013 21:19:06.075120  232267 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501172935s
	I1013 21:19:06.086622  232267 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 21:19:06.097933  232267 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 21:19:06.107625  232267 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 21:19:06.107907  232267 kubeadm.go:318] [mark-control-plane] Marking the node addons-143775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 21:19:06.117237  232267 kubeadm.go:318] [bootstrap-token] Using token: mrcpwn.vva6go2h8n9djyuw
	I1013 21:19:06.118794  232267 out.go:252]   - Configuring RBAC rules ...
	I1013 21:19:06.118917  232267 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 21:19:06.122141  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 21:19:06.127672  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 21:19:06.131315  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 21:19:06.133917  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 21:19:06.136680  232267 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 21:19:06.480666  232267 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 21:19:06.901241  232267 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 21:19:07.482451  232267 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 21:19:07.483247  232267 kubeadm.go:318] 
	I1013 21:19:07.483312  232267 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 21:19:07.483320  232267 kubeadm.go:318] 
	I1013 21:19:07.483388  232267 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 21:19:07.483394  232267 kubeadm.go:318] 
	I1013 21:19:07.483419  232267 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 21:19:07.483476  232267 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 21:19:07.483538  232267 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 21:19:07.483551  232267 kubeadm.go:318] 
	I1013 21:19:07.483612  232267 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 21:19:07.483621  232267 kubeadm.go:318] 
	I1013 21:19:07.483690  232267 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 21:19:07.483699  232267 kubeadm.go:318] 
	I1013 21:19:07.483765  232267 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 21:19:07.483882  232267 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 21:19:07.483943  232267 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 21:19:07.483950  232267 kubeadm.go:318] 
	I1013 21:19:07.484104  232267 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 21:19:07.484228  232267 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 21:19:07.484270  232267 kubeadm.go:318] 
	I1013 21:19:07.484406  232267 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mrcpwn.vva6go2h8n9djyuw \
	I1013 21:19:07.484545  232267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 21:19:07.484586  232267 kubeadm.go:318] 	--control-plane 
	I1013 21:19:07.484596  232267 kubeadm.go:318] 
	I1013 21:19:07.484708  232267 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 21:19:07.484717  232267 kubeadm.go:318] 
	I1013 21:19:07.484818  232267 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mrcpwn.vva6go2h8n9djyuw \
	I1013 21:19:07.484955  232267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 21:19:07.487290  232267 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 21:19:07.487441  232267 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 21:19:07.487494  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:19:07.487516  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:19:07.489286  232267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 21:19:07.490633  232267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 21:19:07.495132  232267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 21:19:07.495151  232267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 21:19:07.508587  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 21:19:07.726563  232267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 21:19:07.726668  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:07.726701  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-143775 minikube.k8s.io/updated_at=2025_10_13T21_19_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=addons-143775 minikube.k8s.io/primary=true
	I1013 21:19:07.738222  232267 ops.go:34] apiserver oom_adj: -16
	I1013 21:19:07.818042  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:08.318779  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:08.818314  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:09.318874  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:09.818354  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:10.319054  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:10.818141  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:11.319068  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:11.819093  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:12.318176  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:12.386449  232267 kubeadm.go:1113] duration metric: took 4.659843165s to wait for elevateKubeSystemPrivileges
	I1013 21:19:12.386497  232267 kubeadm.go:402] duration metric: took 14.064226802s to StartCluster
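	The repeated 'kubectl get sa default' calls between 21:19:07.8 and 21:19:12.3 are a poll: the default ServiceAccount is created asynchronously by the controller manager, and the timestamps show a retry roughly every 500ms until it appears (4.66s here). The loop amounts to this sketch:
	
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done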
	I1013 21:19:12.386526  232267 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:19:12.386702  232267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:19:12.387314  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:19:12.387506  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 21:19:12.387537  232267 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:19:12.387593  232267 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 21:19:12.387766  232267 addons.go:69] Setting yakd=true in profile "addons-143775"
	I1013 21:19:12.387776  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:12.387781  232267 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-143775"
	I1013 21:19:12.387794  232267 addons.go:238] Setting addon yakd=true in "addons-143775"
	I1013 21:19:12.387814  232267 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-143775"
	I1013 21:19:12.387829  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387865  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387876  232267 addons.go:69] Setting storage-provisioner=true in profile "addons-143775"
	I1013 21:19:12.387889  232267 addons.go:238] Setting addon storage-provisioner=true in "addons-143775"
	I1013 21:19:12.387876  232267 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-143775"
	I1013 21:19:12.387915  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387900  232267 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-143775"
	I1013 21:19:12.388070  232267 addons.go:69] Setting metrics-server=true in profile "addons-143775"
	I1013 21:19:12.387971  232267 addons.go:69] Setting default-storageclass=true in profile "addons-143775"
	I1013 21:19:12.388094  232267 addons.go:69] Setting volcano=true in profile "addons-143775"
	I1013 21:19:12.388105  232267 addons.go:69] Setting registry-creds=true in profile "addons-143775"
	I1013 21:19:12.388111  232267 addons.go:238] Setting addon volcano=true in "addons-143775"
	I1013 21:19:12.388113  232267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-143775"
	I1013 21:19:12.388116  232267 addons.go:69] Setting volumesnapshots=true in profile "addons-143775"
	I1013 21:19:12.388122  232267 addons.go:238] Setting addon registry-creds=true in "addons-143775"
	I1013 21:19:12.388130  232267 addons.go:238] Setting addon volumesnapshots=true in "addons-143775"
	I1013 21:19:12.388165  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388189  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388191  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388207  232267 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-143775"
	I1013 21:19:12.388222  232267 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-143775"
	I1013 21:19:12.388509  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388509  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388571  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388096  232267 addons.go:238] Setting addon metrics-server=true in "addons-143775"
	I1013 21:19:12.388615  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388671  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388675  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388750  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388079  232267 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-143775"
	I1013 21:19:12.388984  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388018  232267 addons.go:69] Setting inspektor-gadget=true in profile "addons-143775"
	I1013 21:19:12.389169  232267 addons.go:238] Setting addon inspektor-gadget=true in "addons-143775"
	I1013 21:19:12.389192  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.389576  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.389620  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388034  232267 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-143775"
	I1013 21:19:12.390115  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388011  232267 addons.go:69] Setting ingress-dns=true in profile "addons-143775"
	I1013 21:19:12.390522  232267 addons.go:238] Setting addon ingress-dns=true in "addons-143775"
	I1013 21:19:12.390539  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.390568  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388008  232267 addons.go:69] Setting cloud-spanner=true in profile "addons-143775"
	I1013 21:19:12.390613  232267 addons.go:238] Setting addon cloud-spanner=true in "addons-143775"
	I1013 21:19:12.390627  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.390650  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387980  232267 addons.go:69] Setting gcp-auth=true in profile "addons-143775"
	I1013 21:19:12.390946  232267 mustload.go:65] Loading cluster: addons-143775
	I1013 21:19:12.388510  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388077  232267 addons.go:69] Setting registry=true in profile "addons-143775"
	I1013 21:19:12.391381  232267 addons.go:238] Setting addon registry=true in "addons-143775"
	I1013 21:19:12.391420  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.389072  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388003  232267 addons.go:69] Setting ingress=true in profile "addons-143775"
	I1013 21:19:12.391698  232267 addons.go:238] Setting addon ingress=true in "addons-143775"
	I1013 21:19:12.391740  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.391821  232267 out.go:179] * Verifying Kubernetes components...
	I1013 21:19:12.394548  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.394917  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:12.394962  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.395668  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.395739  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.396161  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.396459  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:19:12.438336  232267 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 21:19:12.440526  232267 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:19:12.440655  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 21:19:12.440857  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.444691  232267 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 21:19:12.446077  232267 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:19:12.446103  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 21:19:12.446177  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.465046  232267 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 21:19:12.465185  232267 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 21:19:12.467483  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 21:19:12.467509  232267 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 21:19:12.467586  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.469426  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 21:19:12.469446  232267 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 21:19:12.469506  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.473485  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 21:19:12.474383  232267 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 21:19:12.474712  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 21:19:12.474735  232267 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 21:19:12.474802  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.477743  232267 addons.go:238] Setting addon default-storageclass=true in "addons-143775"
	I1013 21:19:12.477834  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.478032  232267 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-143775"
	I1013 21:19:12.478424  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.478357  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.479181  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.480489  232267 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 21:19:12.481676  232267 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 21:19:12.481700  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 21:19:12.481757  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.485089  232267 host.go:66] Checking if "addons-143775" exists ...
	W1013 21:19:12.493225  232267 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 21:19:12.505444  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1013 21:19:12.507062  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:12.508823  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:12.510687  232267 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 21:19:12.510903  232267 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:19:12.511192  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 21:19:12.511308  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.512745  232267 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:19:12.512810  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 21:19:12.512893  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.523603  232267 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 21:19:12.525086  232267 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 21:19:12.525118  232267 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 21:19:12.525191  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.526094  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.526624  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
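	The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward plugin, a log directive ahead of errors, then replaces the ConfigMap. The fragment it injects (taken verbatim from the command) resolves host.minikube.internal to the Docker network gateway:
	
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }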
	I1013 21:19:12.530839  232267 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 21:19:12.532353  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 21:19:12.532394  232267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 21:19:12.534156  232267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:19:12.534178  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 21:19:12.534234  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.534796  232267 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 21:19:12.536094  232267 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:19:12.536112  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 21:19:12.536117  232267 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 21:19:12.536131  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 21:19:12.536188  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.536436  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.536944  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.537890  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.542663  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 21:19:12.544906  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 21:19:12.548057  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 21:19:12.550519  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.555953  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.556679  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 21:19:12.558159  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 21:19:12.559291  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.564394  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 21:19:12.565795  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 21:19:12.567037  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 21:19:12.567062  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 21:19:12.567132  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.570912  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.576046  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.585343  232267 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 21:19:12.586796  232267 out.go:179]   - Using image docker.io/busybox:stable
	I1013 21:19:12.587584  232267 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 21:19:12.587607  232267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 21:19:12.587672  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.590079  232267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:19:12.590103  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 21:19:12.590163  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.597460  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.609439  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.622142  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.629302  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.633357  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.637087  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.644340  232267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:19:12.647519  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.728806  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:19:12.736704  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 21:19:12.736739  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 21:19:12.740874  232267 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 21:19:12.740898  232267 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 21:19:12.760547  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 21:19:12.760580  232267 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 21:19:12.761914  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 21:19:12.761963  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 21:19:12.764879  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:19:12.773374  232267 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:19:12.773402  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 21:19:12.777976  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:19:12.804773  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 21:19:12.804807  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 21:19:12.823742  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 21:19:12.823777  232267 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 21:19:12.825195  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 21:19:12.827343  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 21:19:12.827364  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 21:19:12.830381  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:19:12.835258  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:19:12.837959  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:19:12.841572  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 21:19:12.841649  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 21:19:12.850062  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:19:12.857826  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:19:12.862808  232267 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:12.862908  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 21:19:12.876079  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 21:19:12.876209  232267 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 21:19:12.881458  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 21:19:12.883267  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 21:19:12.883358  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 21:19:12.887626  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 21:19:12.887713  232267 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 21:19:12.893675  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 21:19:12.893700  232267 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 21:19:12.931391  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:12.932311  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 21:19:12.932390  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 21:19:12.937787  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:19:12.937815  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 21:19:12.946827  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:19:12.946913  232267 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 21:19:12.952719  232267 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:19:12.952745  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 21:19:12.956135  232267 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1013 21:19:12.957389  232267 node_ready.go:35] waiting up to 6m0s for node "addons-143775" to be "Ready" ...
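Note: node_ready.go polls the node object through client-go until its Ready condition turns True. A rough CLI equivalent of the same check, using the kubeconfig path seen in the commands above (illustrative only; minikube does not shell out for this):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      wait node/addons-143775 --for=condition=Ready --timeout=6m0s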
	I1013 21:19:12.975459  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 21:19:12.975491  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 21:19:13.022710  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:19:13.024236  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:19:13.047600  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:19:13.067978  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 21:19:13.068023  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 21:19:13.181486  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 21:19:13.181513  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 21:19:13.255511  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 21:19:13.255612  232267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 21:19:13.327950  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 21:19:13.327980  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 21:19:13.378650  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 21:19:13.378743  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 21:19:13.442126  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:19:13.442344  232267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 21:19:13.470751  232267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-143775" context rescaled to 1 replicas
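Note: the coredns rescale above trims the deployment to a single replica, which is sufficient on a one-node cluster. A CLI sketch of the same operation (kapi.go performs this via client-go, so this command is illustrative only):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1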
	I1013 21:19:13.495680  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:19:14.064133  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.228836662s)
	I1013 21:19:14.064180  232267 addons.go:479] Verifying addon ingress=true in "addons-143775"
	I1013 21:19:14.064196  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.226106902s)
	I1013 21:19:14.064292  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.182810084s)
	I1013 21:19:14.064427  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133008106s)
	I1013 21:19:14.064252  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.214107775s)
	W1013 21:19:14.064467  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:14.064491  232267 retry.go:31] will retry after 247.153057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
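Note: the validation failure above is kubectl's client-side schema check: every YAML document in an applied manifest must carry top-level apiVersion and kind fields, and at least one document in ig-crd.yaml evidently does not. The error text offers --validate=false as an escape hatch, but minikube retries instead. A hypothetical minimal object showing the header every document needs (the kind and name here are illustrative, not the actual ig-crd.yaml content, which this log does not show):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig apply -f - <<'EOF'
    apiVersion: v1              # required on every document
    kind: ConfigMap             # required on every document
    metadata:
      name: header-demo         # hypothetical name for illustration
    data: {}
    EOF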
	I1013 21:19:14.064275  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.206420858s)
	I1013 21:19:14.064551  232267 addons.go:479] Verifying addon registry=true in "addons-143775"
	I1013 21:19:14.064656  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040377637s)
	I1013 21:19:14.064674  232267 addons.go:479] Verifying addon metrics-server=true in "addons-143775"
	I1013 21:19:14.064709  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.041763375s)
	I1013 21:19:14.066335  232267 out.go:179] * Verifying ingress addon...
	I1013 21:19:14.067109  232267 out.go:179] * Verifying registry addon...
	I1013 21:19:14.067110  232267 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-143775 service yakd-dashboard -n yakd-dashboard
	
	I1013 21:19:14.068917  232267 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 21:19:14.069525  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 21:19:14.073922  232267 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:19:14.073946  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:14.074441  232267 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 21:19:14.074461  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
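Note: kapi.go:75/96 polls pods matched by a label selector in a fixed namespace until they leave Pending. The same selectors can be inspected by hand (illustrative; minikube polls through client-go rather than running these):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      -n kube-system get pods -l kubernetes.io/minikube-addons=registry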
	I1013 21:19:14.312319  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:14.544334  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.49667013s)
	W1013 21:19:14.544394  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 21:19:14.544424  232267 retry.go:31] will retry after 268.468741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
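Note: this failure is ordering, not content. The VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is mapped against the snapshot.storage.k8s.io/v1 API in the same kubectl invocation that creates the CRDs, and the new REST mapping is not yet discoverable, hence "ensure CRDs are installed first". minikube handles this by retrying; a manual two-phase sketch that avoids the race, using the file paths from the log (illustrative only):

    # phase 1: create the CRDs and wait until the API server serves them
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # phase 2: objects of the new kinds can now be mapped and applied
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml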
	I1013 21:19:14.544599  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.048874196s)
	I1013 21:19:14.544641  232267 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-143775"
	I1013 21:19:14.546469  232267 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 21:19:14.548967  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 21:19:14.552244  232267 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:19:14.552267  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:14.571552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:14.571638  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.813701  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1013 21:19:14.948797  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:14.948835  232267 retry.go:31] will retry after 494.855457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:19:14.960754  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:15.053208  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.072813  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.072869  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:15.444876  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:15.553533  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.572938  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.573066  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:16.052477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.072500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.072553  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:16.552700  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.572528  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.572706  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:16.960946  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:17.053198  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.072155  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:17.072764  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:17.323474  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.509718378s)
	I1013 21:19:17.323568  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.878642242s)
	W1013 21:19:17.323616  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.323643  232267 retry.go:31] will retry after 760.118509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.552957  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.572842  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:17.573040  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.052713  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.072459  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:18.072666  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.084619  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:18.552740  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.571796  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.572482  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 21:19:18.629718  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:18.629754  232267 retry.go:31] will retry after 711.667599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:19:18.961106  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:19.052619  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.072396  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.072537  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:19.341866  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:19.553211  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.572543  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.572597  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:19.892593  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:19.892624  232267 retry.go:31] will retry after 1.664296033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:20.052726  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.072308  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.072454  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.092278  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 21:19:20.092347  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:20.109838  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:20.214851  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
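Note: sshutil.go dials the SSH port that the docker driver publishes on the host; the session it opens can be reproduced by hand from the parameters logged above (illustrative only):

    ssh -p 32768 \
      -i /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa \
      docker@127.0.0.1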
	I1013 21:19:20.227959  232267 addons.go:238] Setting addon gcp-auth=true in "addons-143775"
	I1013 21:19:20.228037  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:20.228733  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:20.246542  232267 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 21:19:20.246636  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:20.263783  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:20.360131  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:20.361585  232267 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 21:19:20.362733  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 21:19:20.362751  232267 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 21:19:20.376699  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 21:19:20.376727  232267 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 21:19:20.390084  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:19:20.390111  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 21:19:20.403258  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:19:20.552762  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.572564  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.572830  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.719141  232267 addons.go:479] Verifying addon gcp-auth=true in "addons-143775"
	I1013 21:19:20.720560  232267 out.go:179] * Verifying gcp-auth addon...
	I1013 21:19:20.722367  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 21:19:20.724895  232267 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 21:19:20.724911  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:21.052694  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.072648  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:21.072879  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.225583  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:21.460567  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:21.552680  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.557826  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:21.572455  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:21.572596  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.725209  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.052384  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.072217  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.072340  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:22.117104  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.117151  232267 retry.go:31] will retry after 1.694109804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.226080  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.552821  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.572561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.572827  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:22.725585  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.052733  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.072460  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.073034  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.225926  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:23.461022  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:23.553078  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.572162  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.572552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.725447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.811527  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:24.052884  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.073030  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:24.073279  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.226466  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:24.360624  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:24.360653  232267 retry.go:31] will retry after 3.369253123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:24.552292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.572299  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:24.572409  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.726139  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.052301  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.072145  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.072346  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:25.226360  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.552389  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.572296  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.572430  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:25.726234  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:25.960962  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:26.053371  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.072103  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:26.072325  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.226147  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:26.552181  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.572323  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.572561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:26.725311  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.052611  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.072288  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.072312  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.226263  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.551921  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.572916  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.572955  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.726099  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.731109  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:28.052292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.072317  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.072539  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.226212  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:28.275259  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:28.275304  232267 retry.go:31] will retry after 4.658291441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:19:28.461815  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:28.552606  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.572362  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.572494  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.725301  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.052192  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.071970  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.072503  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.225687  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.552636  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.572326  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.572530  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.725560  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.052769  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.072483  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.072685  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.225266  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.552361  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.572116  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.572265  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.726344  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:30.961058  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:31.052978  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.072568  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.072822  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.225681  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:31.552447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.572394  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.572452  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.725430  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.052772  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.072397  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.072566  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.225454  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.552475  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.572467  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.572596  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.725541  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.933757  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:19:32.961140  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:33.052115  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.072066  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.072549  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.225591  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:33.482681  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:33.482713  232267 retry.go:31] will retry after 9.570443732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
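
The failure above is kubectl's client-side validation: every YAML document in an applied manifest must set apiVersion and kind, and the error indicates at least one document in ig-crd.yaml carries neither (an empty document left behind a stray --- separator produces the same message). A minimal sketch of that check in Go, assuming the manifest path from the log and sigs.k8s.io/yaml for decoding; the naive "\n---\n" split is a simplification of kubectl's real document splitter:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	// typeMeta mirrors the two fields kubectl's validation insists on.
	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		// An empty or truncated document decodes to zero-valued fields,
		// which is exactly the case kubectl rejects.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			var tm typeMeta
			if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
				fmt.Printf("doc %d: %v\n", i, err)
				continue
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				fmt.Printf("doc %d: apiVersion/kind not set (validation would fail)\n", i)
			}
		}
	}

Running something like this against the manifest would pinpoint which document kubectl rejects, which --validate=false would otherwise paper over.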
	I1013 21:19:33.552672  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.572306  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.572484  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.725239  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.053148  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.072195  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:34.072617  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.225531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.552019  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.572742  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.572794  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:34.725548  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:35.052394  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.071815  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:35.072045  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.225848  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:35.460645  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:35.552890  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.573844  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:35.574091  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.725635  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.053173  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.071962  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.072350  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.225287  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.552506  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.572299  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.572435  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.726366  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:37.052274  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.071877  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.072028  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.225791  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:37.460971  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:37.553316  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.572217  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.572419  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.726321  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.052575  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.072368  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.072534  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.225387  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.552242  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.572279  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.572330  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.726203  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.052781  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.072442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.072567  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.225531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.552179  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.572224  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.572556  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.725553  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:39.960314  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:40.052148  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.072170  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.072657  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.225378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:40.552715  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.572469  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.572739  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.725333  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:41.051820  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.072752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.072839  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:41.225825  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:41.552821  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.575029  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.575119  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:41.726216  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:41.961389  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:42.052399  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.072036  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.072149  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.226147  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:42.551803  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.572667  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.572840  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.725665  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.052363  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.053420  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:43.072832  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:43.072863  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.225671  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.552085  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.572624  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.572820  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 21:19:43.602491  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:43.602526  232267 retry.go:31] will retry after 6.263252627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
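
The retry delays logged for these applies vary from attempt to attempt (9.57s above, 6.26s here), which suggests randomized backoff rather than a fixed schedule. A minimal sketch of a jittered retry helper in that spirit; this is an illustration, not minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter retries fn up to attempts times, sleeping a randomized
	// multiple of base between tries so parallel waiters do not synchronize.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)*2)) // 1x to 3x base
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(5, 5*time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("apply failed")
			}
			return nil
		})
		fmt.Println("done:", err)
	}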
	I1013 21:19:43.725477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:44.052672  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.072436  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.072613  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:44.225500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:44.460302  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:44.552468  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.572450  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.572617  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:44.725584  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.052039  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.072900  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.072966  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.225818  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.552914  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.572752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.572939  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.726410  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:46.052724  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.072774  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.072839  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.225582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:46.460524  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:46.552542  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.572519  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.572699  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.725527  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.052582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.072101  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.072541  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:47.225624  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.552788  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.572681  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:47.572859  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.726252  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.052774  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.072460  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.072565  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:48.225223  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.552218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.572454  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:48.572950  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.725873  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:48.961129  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:49.052940  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.072704  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:49.072912  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.225716  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.552542  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.572632  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:49.572720  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.725548  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.866790  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:50.052859  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.073092  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.073261  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:50.225986  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:50.417637  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:50.417676  232267 retry.go:31] will retry after 15.780847337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:50.552751  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.572860  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.573111  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:50.726089  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:51.052477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.072242  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.072488  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.225378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:51.460354  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:51.552498  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.572431  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.572589  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.725531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.052406  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.072115  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:52.072307  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.226074  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.552206  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.572387  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.572731  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:52.725952  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:53.052427  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.072373  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:53.072563  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:53.225269  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:53.461219  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:53.552366  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.572250  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:53.572265  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:53.726099  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.054112  232267 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:19:54.054139  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.075681  232267 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:19:54.075708  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:54.075745  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.225633  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.461676  232267 node_ready.go:49] node "addons-143775" is "Ready"
	I1013 21:19:54.461718  232267 node_ready.go:38] duration metric: took 41.504300426s for node "addons-143775" to be "Ready" ...
	I1013 21:19:54.461738  232267 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:19:54.461794  232267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:19:54.488405  232267 api_server.go:72] duration metric: took 42.100831976s to wait for apiserver process to appear ...
	I1013 21:19:54.488436  232267 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:19:54.488462  232267 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 21:19:54.493727  232267 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1013 21:19:54.494880  232267 api_server.go:141] control plane version: v1.34.1
	I1013 21:19:54.494914  232267 api_server.go:131] duration metric: took 6.469739ms to wait for apiserver health ...
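
Once the node reports Ready, the log shows a direct probe of https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A self-contained sketch of that probe, with the endpoint taken from the log; InsecureSkipVerify stands in for the cluster CA the real client would trust:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Example only: a real probe would verify the apiserver cert
				// against the cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}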
	I1013 21:19:54.494927  232267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:19:54.499750  232267 system_pods.go:59] 20 kube-system pods found
	I1013 21:19:54.499800  232267 system_pods.go:61] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.499812  232267 system_pods.go:61] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.499831  232267 system_pods.go:61] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.499840  232267 system_pods.go:61] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.499881  232267 system_pods.go:61] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.499896  232267 system_pods.go:61] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.499902  232267 system_pods.go:61] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.499908  232267 system_pods.go:61] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.499913  232267 system_pods.go:61] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.499922  232267 system_pods.go:61] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.499928  232267 system_pods.go:61] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.499935  232267 system_pods.go:61] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.499943  232267 system_pods.go:61] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.499953  232267 system_pods.go:61] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.499962  232267 system_pods.go:61] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.499976  232267 system_pods.go:61] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.499985  232267 system_pods.go:61] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.500020  232267 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.500030  232267 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.500038  232267 system_pods.go:61] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.500054  232267 system_pods.go:74] duration metric: took 5.115632ms to wait for pod list to return data ...
	I1013 21:19:54.500077  232267 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:19:54.502920  232267 default_sa.go:45] found service account: "default"
	I1013 21:19:54.502950  232267 default_sa.go:55] duration metric: took 2.861101ms for default service account to be created ...
	I1013 21:19:54.502966  232267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:19:54.599561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.599595  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:54.599708  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.600730  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:54.600773  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.600785  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.600794  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.600800  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.600806  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.600810  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.600815  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.600822  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.600826  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.600831  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.600834  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.600838  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.600844  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.600853  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.600859  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.600866  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.600872  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.600878  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.600885  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.600892  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.600909  232267 retry.go:31] will retry after 216.411369ms: missing components: kube-dns
	I1013 21:19:54.726004  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.823174  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:54.823215  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.823226  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.823237  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.823246  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.823255  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.823261  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.823267  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.823273  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.823279  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.823287  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.823293  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.823299  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.823306  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.823315  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.823323  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.823330  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.823338  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.823352  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.823365  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.823376  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.823399  232267 retry.go:31] will retry after 251.942092ms: missing components: kube-dns
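
The "missing components: kube-dns" retry above boils down to listing kube-system pods by label and requiring at least one in phase Running. A rough client-go sketch of that predicate, assuming the conventional k8s-app=kube-dns label and the kubeconfig path from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if running == 0 {
			fmt.Println("missing components: kube-dns (will retry)")
		} else {
			fmt.Printf("kube-dns running (%d pod(s))\n", running)
		}
	}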
	I1013 21:19:55.053202  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.072261  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.072818  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:55.079834  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:55.079873  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:55.079882  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Running
	I1013 21:19:55.079895  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:55.079906  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:55.079916  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:55.079924  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:55.079931  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:55.079939  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:55.079946  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:55.079958  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:55.079968  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:55.079975  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:55.079986  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:55.080015  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:55.080028  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:55.080036  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:55.080046  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:55.080057  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:55.080065  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:55.080080  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Running
	I1013 21:19:55.080096  232267 system_pods.go:126] duration metric: took 577.123161ms to wait for k8s-apps to be running ...
	I1013 21:19:55.080108  232267 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:19:55.080165  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:19:55.099386  232267 system_svc.go:56] duration metric: took 19.266854ms WaitForService to wait for kubelet
	I1013 21:19:55.099423  232267 kubeadm.go:586] duration metric: took 42.711856848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:19:55.099453  232267 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:19:55.103285  232267 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 21:19:55.103321  232267 node_conditions.go:123] node cpu capacity is 8
	I1013 21:19:55.103356  232267 node_conditions.go:105] duration metric: took 3.896171ms to run NodePressure ...
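
The NodePressure step reads the node's reported capacity (8 CPUs and 304681132Ki of ephemeral storage here) and would fail on any active pressure condition. A sketch of the same lookup; the node name and kubeconfig path come from the log, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-143775", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu())
		fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral())
		// Flag memory, disk, or PID pressure if the kubelet reports any.
		for _, c := range node.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Println("node under pressure:", c.Type)
			}
		}
	}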
	I1013 21:19:55.103372  232267 start.go:241] waiting for startup goroutines ...
	I1013 21:19:55.226271  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:55.553436  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:55.572758  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.726025  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.053078  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.073254  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:56.073301  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.226321  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.552981  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.573300  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.573306  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:56.726183  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.053036  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.072597  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:57.072788  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.225502  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.553035  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.573096  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:57.573223  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.726378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.054845  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.073421  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:58.073451  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.226834  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.553689  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.572945  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.573092  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:58.726341  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.053610  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.072685  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:59.072701  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.226136  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.552434  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:59.572520  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.725292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.052960  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.073055  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:00.073176  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.226118  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.552477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.573665  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:00.573842  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.725836  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.052778  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.073158  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:01.073200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.226592  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.553493  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.572593  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:01.572702  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.725808  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.052437  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.072522  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.152752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:02.225725  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.552607  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.573796  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:02.573827  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.726494  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.053712  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.073103  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:03.073200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.226699  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.553873  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.573047  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:03.573096  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.726111  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.052414  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.072442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:04.072532  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.226020  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.551917  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.573359  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:04.573489  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.819561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.053259  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.072483  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.072724  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:05.225403  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.552802  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.572922  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.572922  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:05.726607  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.053144  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.072775  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:06.072868  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.198888  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:20:06.227127  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.552732  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.573152  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:06.573166  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.726538  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:20:06.842531  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:06.842573  232267 retry.go:31] will retry after 30.83180354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
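The failure above is kubectl's client-side validation: every document in an applied manifest must set apiVersion and kind, and the generated ig-crd.yaml evidently carries a document missing both. A minimal repro of the same error class (path and contents are illustrative, not the real ig-crd.yaml):

    # Hypothetical manifest with the two required fields missing
    cat > /tmp/bad-crd.yaml <<'EOF'
    metadata:
      name: example
    EOF
    kubectl apply -f /tmp/bad-crd.yaml
    # error: error validating "/tmp/bad-crd.yaml": error validating data:
    # [apiVersion not set, kind not set]; if you choose to ignore these
    # errors, turn validation off with --validate=false

Passing --validate=false only silences the schema check; the apply still cannot route an object whose kind is unknown, so the real fix is restoring the missing fields in the manifest.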
	I1013 21:20:07.053222  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.072223  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.072383  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:07.224959  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:07.552523  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.572335  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:07.572403  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.726272  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.073084  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:08.073096  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.073214  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.226351  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.552836  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.573100  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:08.573100  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.727102  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.052838  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.072910  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:09.073023  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.226464  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.553457  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:09.572544  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.725705  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.098200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.098307  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:10.098385  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.226218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.553027  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.573316  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:10.573737  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.726454  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:11.053268  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.072447  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:11.072552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:11.225751  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:11.552955  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.573121  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:11.573227  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:11.726432  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.052960  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.072636  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:12.072700  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:12.226285  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.553530  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.572760  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:12.572967  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:12.726613  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.053145  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.154361  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:13.154416  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:13.254763  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.553264  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.572363  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:13.572668  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:13.725930  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.053087  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.073203  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:14.073498  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:14.226035  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.552956  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.573027  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:14.573128  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:14.726098  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:15.052219  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:15.072706  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:15.072731  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:15.225808  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:15.553451  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:15.572633  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:15.572714  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:15.725680  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:16.110172  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.110934  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:16.111247  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:16.334850  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:16.553905  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.573829  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:16.574135  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:16.727444  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:17.056809  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.077328  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:17.077931  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:17.227729  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:17.552965  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.574429  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:17.574442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:17.726094  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:18.052719  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.073307  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:18.073751  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:18.227388  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:18.553535  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.572635  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:18.573011  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:18.726985  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:19.053230  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.073500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:19.073554  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:19.225478  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:19.552878  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.573453  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:19.573497  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:19.725504  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:20.053544  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:20.072492  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:20.072690  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:20.225979  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:20.552640  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:20.572912  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:20.573072  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:20.726068  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:21.052964  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:21.073191  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:21.073350  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:21.226717  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:21.552614  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:21.573208  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:21.573400  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:21.726432  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:22.053674  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:22.074070  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:22.075892  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:22.225823  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:22.552416  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:22.572488  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:22.572488  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:22.725630  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:23.052917  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:23.072591  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:23.072661  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:23.225904  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:23.553408  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:23.573971  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:23.574033  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:23.726084  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:24.052804  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:24.072450  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:24.072791  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:24.225714  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:24.554134  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:24.575701  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:24.575919  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:24.727061  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:25.058795  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:25.267678  232267 kapi.go:107] duration metric: took 1m11.198148764s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 21:20:25.267810  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:25.268452  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:25.552838  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:25.572718  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:25.725664  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:26.053578  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:26.072904  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:26.226170  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:26.553708  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:26.572832  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:26.726087  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:27.052517  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:27.072556  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:27.226749  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:27.553714  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:27.572955  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:27.725582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:28.053850  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:28.073623  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:28.226065  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:28.553218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:28.573947  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:28.726274  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:29.053293  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:29.153666  232267 kapi.go:107] duration metric: took 1m15.084743515s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 21:20:29.225557  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:29.552625  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:29.725863  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:30.053011  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:30.227711  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:30.553588  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:30.725907  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:31.087781  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:31.225439  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:31.553117  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:31.726603  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:32.052687  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:32.226603  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:32.553080  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:32.726220  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:33.052887  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:33.226937  232267 kapi.go:107] duration metric: took 1m12.504562068s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 21:20:33.229120  232267 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-143775 cluster.
	I1013 21:20:33.231018  232267 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 21:20:33.232908  232267 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
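As a concrete example of the skip label mentioned above, a throwaway pod can opt out of the credential mount like this (pod name and image are illustrative; the webhook keys off the gcp-auth-skip-secret label):

    kubectl run no-gcp-test --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600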
	I1013 21:20:33.552716  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:34.052908  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:34.552583  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:35.053785  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:35.553715  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:36.053298  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:36.552534  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:37.053017  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:37.552545  232267 kapi.go:107] duration metric: took 1m23.003577776s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
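The kapi.go:96/kapi.go:107 lines above are minikube polling addon pods by label selector until they report Ready. Roughly the same checks expressed with kubectl (namespaces inferred from the container listing at the end of this log; the 6m timeout is illustrative):

    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=registry            --for=condition=Ready --timeout=6m
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx              --for=condition=Ready --timeout=6m
    kubectl -n gcp-auth      wait pod -l kubernetes.io/minikube-addons=gcp-auth            --for=condition=Ready --timeout=6m
    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m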
	I1013 21:20:37.674622  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:20:38.217361  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:38.217397  232267 retry.go:31] will retry after 21.710673613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:59.929058  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:21:00.466508  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:21:00.466651  232267 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
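Once the manifest problem is fixed, the failed addon can be retried by hand against this profile without rerunning start (a sketch using the minikube addons subcommands):

    minikube -p addons-143775 addons disable inspektor-gadget
    minikube -p addons-143775 addons enable inspektor-gadget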
	I1013 21:21:00.468960  232267 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, registry-creds, storage-provisioner, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1013 21:21:00.470144  232267 addons.go:514] duration metric: took 1m48.082551271s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns registry-creds storage-provisioner default-storageclass nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1013 21:21:00.470188  232267 start.go:246] waiting for cluster config update ...
	I1013 21:21:00.470214  232267 start.go:255] writing updated cluster config ...
	I1013 21:21:00.470510  232267 ssh_runner.go:195] Run: rm -f paused
	I1013 21:21:00.474646  232267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:21:00.478738  232267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrwcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.483233  232267 pod_ready.go:94] pod "coredns-66bc5c9577-hrwcq" is "Ready"
	I1013 21:21:00.483259  232267 pod_ready.go:86] duration metric: took 4.496946ms for pod "coredns-66bc5c9577-hrwcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.485451  232267 pod_ready.go:83] waiting for pod "etcd-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.489276  232267 pod_ready.go:94] pod "etcd-addons-143775" is "Ready"
	I1013 21:21:00.489299  232267 pod_ready.go:86] duration metric: took 3.830168ms for pod "etcd-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.491222  232267 pod_ready.go:83] waiting for pod "kube-apiserver-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.494683  232267 pod_ready.go:94] pod "kube-apiserver-addons-143775" is "Ready"
	I1013 21:21:00.494702  232267 pod_ready.go:86] duration metric: took 3.461071ms for pod "kube-apiserver-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.496584  232267 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.879089  232267 pod_ready.go:94] pod "kube-controller-manager-addons-143775" is "Ready"
	I1013 21:21:00.879123  232267 pod_ready.go:86] duration metric: took 382.522118ms for pod "kube-controller-manager-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.078960  232267 pod_ready.go:83] waiting for pod "kube-proxy-m55cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.478369  232267 pod_ready.go:94] pod "kube-proxy-m55cq" is "Ready"
	I1013 21:21:01.478397  232267 pod_ready.go:86] duration metric: took 399.409914ms for pod "kube-proxy-m55cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.678701  232267 pod_ready.go:83] waiting for pod "kube-scheduler-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:02.079364  232267 pod_ready.go:94] pod "kube-scheduler-addons-143775" is "Ready"
	I1013 21:21:02.079399  232267 pod_ready.go:86] duration metric: took 400.668182ms for pod "kube-scheduler-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:02.079411  232267 pod_ready.go:40] duration metric: took 1.60473781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:21:02.125252  232267 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 21:21:02.127556  232267 out.go:179] * Done! kubectl is now configured to use "addons-143775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.96147983Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-skkk5/registry-creds" id=18e309a7-5c8e-4561-b803-28ce96b54636 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.962284114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.967664736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.968202361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.998426385Z" level=info msg="Created container 95cc4a1bf7c6c190d77c5ca695c3d04c0d6d2e1e3d0e18f4626219f353f0775c: kube-system/registry-creds-764b6fb674-skkk5/registry-creds" id=18e309a7-5c8e-4561-b803-28ce96b54636 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:22:10 addons-143775 crio[778]: time="2025-10-13T21:22:10.999076266Z" level=info msg="Starting container: 95cc4a1bf7c6c190d77c5ca695c3d04c0d6d2e1e3d0e18f4626219f353f0775c" id=e30cf7f5-c51b-4ae1-bd88-f0f18eceb70d name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:22:11 addons-143775 crio[778]: time="2025-10-13T21:22:11.000932398Z" level=info msg="Started container" PID=9034 containerID=95cc4a1bf7c6c190d77c5ca695c3d04c0d6d2e1e3d0e18f4626219f353f0775c description=kube-system/registry-creds-764b6fb674-skkk5/registry-creds id=e30cf7f5-c51b-4ae1-bd88-f0f18eceb70d name=/runtime.v1.RuntimeService/StartContainer sandboxID=afbbf05b8388d2d869e1d799a99fa0f6950f3fcce2770276c407814671b20c54
	Oct 13 21:23:06 addons-143775 crio[778]: time="2025-10-13T21:23:06.823957776Z" level=info msg="Stopping pod sandbox: 69da0d02551a359198d626f06175ea1950f709201fffe2d770839dc6ce202a7f" id=95f471e4-ae81-4834-85dd-14d5403bf2bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:23:06 addons-143775 crio[778]: time="2025-10-13T21:23:06.824044302Z" level=info msg="Stopped pod sandbox (already stopped): 69da0d02551a359198d626f06175ea1950f709201fffe2d770839dc6ce202a7f" id=95f471e4-ae81-4834-85dd-14d5403bf2bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:23:06 addons-143775 crio[778]: time="2025-10-13T21:23:06.824333379Z" level=info msg="Removing pod sandbox: 69da0d02551a359198d626f06175ea1950f709201fffe2d770839dc6ce202a7f" id=77010efa-1962-453d-ab4c-93037330f5f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:23:06 addons-143775 crio[778]: time="2025-10-13T21:23:06.828290469Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 21:23:06 addons-143775 crio[778]: time="2025-10-13T21:23:06.828347755Z" level=info msg="Removed pod sandbox: 69da0d02551a359198d626f06175ea1950f709201fffe2d770839dc6ce202a7f" id=77010efa-1962-453d-ab4c-93037330f5f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.163373969Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-m4txs/POD" id=d982ede9-6db4-48f3-865b-fbd053ef5670 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.163484153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.169916247Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m4txs Namespace:default ID:2bdb992f67dd11be7fd35042a5d3108b11bb47eb68a5deea19d9979dc2b140cb UID:c555e683-cb56-4113-8c62-36261f004cfd NetNS:/var/run/netns/f96792c8-a9ef-4936-b494-1df1e08ea687 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007ed388}] Aliases:map[]}"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.16994641Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-m4txs to CNI network \"kindnet\" (type=ptp)"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.180586166Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m4txs Namespace:default ID:2bdb992f67dd11be7fd35042a5d3108b11bb47eb68a5deea19d9979dc2b140cb UID:c555e683-cb56-4113-8c62-36261f004cfd NetNS:/var/run/netns/f96792c8-a9ef-4936-b494-1df1e08ea687 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007ed388}] Aliases:map[]}"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.180745992Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-m4txs for CNI network kindnet (type=ptp)"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.181800912Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.183087429Z" level=info msg="Ran pod sandbox 2bdb992f67dd11be7fd35042a5d3108b11bb47eb68a5deea19d9979dc2b140cb with infra container: default/hello-world-app-5d498dc89-m4txs/POD" id=d982ede9-6db4-48f3-865b-fbd053ef5670 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.184446102Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cb4f0da0-6dd3-4851-9b8e-8650235d244b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.184584619Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=cb4f0da0-6dd3-4851-9b8e-8650235d244b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.184625031Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=cb4f0da0-6dd3-4851-9b8e-8650235d244b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.185357466Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6bb2b420-4ef4-42c7-a910-b014fae6bc15 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:23:39 addons-143775 crio[778]: time="2025-10-13T21:23:39.201392441Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
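The CRI-O excerpt ends with the docker.io/kicbase/echo-server:1.0 pull still in flight, so the hello-world-app container cannot start until it completes. On the node, the runtime side can be inspected directly with crictl (standard flags; a sketch, not output from this run):

    sudo crictl images | grep echo-server      # has the image landed yet?
    sudo crictl pods --name hello-world-app    # sandbox state for the pending pod
    sudo crictl ps -a | grep hello-world-app   # containers, including exited attempts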
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	95cc4a1bf7c6c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   afbbf05b8388d       registry-creds-764b6fb674-skkk5             kube-system
	e989541a0cdff       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   693a3eb8e24d3       nginx                                       default
	d75436c0ee2da       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   dc9c8bb6fdec2       busybox                                     default
	33180043b49d2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	29890b5558c66       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	0f56c52e6564a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	dfd7f05ad90ea       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	1ca16a8f31ca6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   6365076ed95a3       gcp-auth-78565c9fb4-drvz6                   gcp-auth
	9da9822bfa300       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   f16e164ebc52d       gadget-lkcrw                                gadget
	d3f41f21c86bd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	5621e395830f5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   57b69f8a7447e       ingress-nginx-controller-675c5ddd98-cvxfz   ingress-nginx
	178e4409ca2b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   f8214b125d132       registry-proxy-rrhdd                        kube-system
	8d550cc3998c8       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   60add3cf45101       nvidia-device-plugin-daemonset-dncl2        kube-system
	57bd7bb06e366       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   327972e03f278       amd-gpu-device-plugin-ppkwz                 kube-system
	03f55a19579f6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	37d832fcb8c1f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   2c16479167a4f       snapshot-controller-7d9fbc56b8-kkj6s        kube-system
	0316d05383999       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   349a43c34d66a       snapshot-controller-7d9fbc56b8-zv74f        kube-system
	c42f211cc6800       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   fcf974a645f96       ingress-nginx-admission-patch-nrsqr         ingress-nginx
	630a251fc66ba       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   2721c99f66266       csi-hostpath-attacher-0                     kube-system
	03c7460cdbd20       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   8da65f70d0dee       metrics-server-85b7d694d7-vdzpz             kube-system
	0e9754c3036df       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   de6d113547ec4       csi-hostpath-resizer-0                      kube-system
	4270a9ae8a25b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   5c7a95ca85d43       ingress-nginx-admission-create-jm9d9        ingress-nginx
	fd4aee1022dce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   627a6eee50231       local-path-provisioner-648f6765c9-6dwg5     local-path-storage
	e57df483a324f       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   e3ab846f99e32       registry-6b586f9694-h4pdt                   kube-system
	a5b743f1ce5c1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   bf88fe493eabb       yakd-dashboard-5ff678cb9-j4nvc              yakd-dashboard
	a21bb2b294cea       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   f530edb9dc256       kube-ingress-dns-minikube                   kube-system
	b7da2064722f5       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   473c9fb727e90       cloud-spanner-emulator-86bd5cbb97-tr882     default
	278b4b7546c8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   cf835b56a046c       coredns-66bc5c9577-hrwcq                    kube-system
	e208e9862015d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   bac617f9b937f       storage-provisioner                         kube-system
	4a3e089044a38       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   1684b1b800d4a       kindnet-gxtvs                               kube-system
	ac355aa00aaae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   040a74504a80b       kube-proxy-m55cq                            kube-system
	fc72bcf650d5a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   ced9bc5edb2b1       kube-apiserver-addons-143775                kube-system
	4f9c304b23eab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   7f6253c4294cd       etcd-addons-143775                          kube-system
	c0af2973488b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   79036bd56c3eb       kube-scheduler-addons-143775                kube-system
	6cbf217264895       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   ad6e3df91e539       kube-controller-manager-addons-143775       kube-system
	
	
	==> coredns [278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374] <==
	[INFO] 10.244.0.22:54153 - 39426 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004891933s
	[INFO] 10.244.0.22:42992 - 14299 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005172396s
	[INFO] 10.244.0.22:35143 - 40165 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006667947s
	[INFO] 10.244.0.22:54182 - 48169 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004977609s
	[INFO] 10.244.0.22:49304 - 18842 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00722354s
	[INFO] 10.244.0.22:35535 - 44233 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002203614s
	[INFO] 10.244.0.22:60572 - 61408 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002651003s
	[INFO] 10.244.0.25:54407 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000251205s
	[INFO] 10.244.0.25:60620 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018567s
	[INFO] 10.244.0.31:41969 - 48024 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000278238s
	[INFO] 10.244.0.31:39434 - 23246 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000355868s
	[INFO] 10.244.0.31:51396 - 20162 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000115919s
	[INFO] 10.244.0.31:43427 - 5305 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.0001532s
	[INFO] 10.244.0.31:43024 - 25767 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000101522s
	[INFO] 10.244.0.31:45163 - 15001 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011739s
	[INFO] 10.244.0.31:41532 - 32836 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004464987s
	[INFO] 10.244.0.31:41486 - 32138 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005344152s
	[INFO] 10.244.0.31:49305 - 36573 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006253241s
	[INFO] 10.244.0.31:35051 - 17437 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006559656s
	[INFO] 10.244.0.31:49736 - 42782 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005154113s
	[INFO] 10.244.0.31:37976 - 15781 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.009229734s
	[INFO] 10.244.0.31:48528 - 63174 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004215768s
	[INFO] 10.244.0.31:52856 - 20165 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004613417s
	[INFO] 10.244.0.31:46378 - 40952 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001735652s
	[INFO] 10.244.0.31:57762 - 5403 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001808049s
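
The run of NXDOMAIN answers above is the normal effect of the pod's resolv.conf search list combined with ndots:5: an external name such as accounts.google.com has fewer than five dots, so the resolver tries it against every search domain before querying the bare name, which finally returns NOERROR. A minimal Go sketch of that expansion follows; the search list is inferred from the queries in this log rather than read from the node, so treat it as an assumption.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand mimics resolv.conf search-list expansion: a name with fewer
    // than ndots dots is tried against each search domain first, and the
    // bare name is queried last.
    func expand(name string, search []string, ndots int) []string {
    	var tries []string
    	if strings.Count(name, ".") < ndots {
    		for _, d := range search {
    			tries = append(tries, name+"."+d)
    		}
    	}
    	return append(tries, name)
    }

    func main() {
    	// Search list inferred from the queries logged above (assumption).
    	search := []string{
    		"kube-system.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"local",
    		"us-central1-a.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal",
    		"google.internal",
    	}
    	for _, q := range expand("accounts.google.com", search, 5) {
    		fmt.Println(q) // reproduces the NXDOMAIN sequence, then the final NOERROR name
    	}
    }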
	
	
	==> describe nodes <==
	Name:               addons-143775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-143775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=addons-143775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_19_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-143775
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-143775"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:19:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-143775
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:23:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:22:41 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:22:41 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:22:41 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:22:41 +0000   Mon, 13 Oct 2025 21:19:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-143775
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                5ac6ceea-0799-4c1e-8b09-5c6dad1bf3ad
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-86bd5cbb97-tr882      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  default                     hello-world-app-5d498dc89-m4txs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-lkcrw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  gcp-auth                    gcp-auth-78565c9fb4-drvz6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-cvxfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-ppkwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-66bc5c9577-hrwcq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m28s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpathplugin-74gj5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-addons-143775                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m34s
	  kube-system                 kindnet-gxtvs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m28s
	  kube-system                 kube-apiserver-addons-143775                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-addons-143775        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-m55cq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-143775                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 metrics-server-85b7d694d7-vdzpz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m27s
	  kube-system                 nvidia-device-plugin-daemonset-dncl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 registry-6b586f9694-h4pdt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-creds-764b6fb674-skkk5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-proxy-rrhdd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 snapshot-controller-7d9fbc56b8-kkj6s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-zv74f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  local-path-storage          local-path-provisioner-648f6765c9-6dwg5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-j4nvc               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node addons-143775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node addons-143775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s (x8 over 4m38s)  kubelet          Node addons-143775 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s                  kubelet          Node addons-143775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s                  kubelet          Node addons-143775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s                  kubelet          Node addons-143775 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m29s                  node-controller  Node addons-143775 event: Registered Node addons-143775 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node addons-143775 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e] <==
	{"level":"warn","ts":"2025-10-13T21:19:03.881889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.888928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.902496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.909549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.915908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.964617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:15.065445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:15.072717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.566555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.574740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.593148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.600235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:20:10.096002Z","caller":"traceutil/trace.go:172","msg":"trace[1412483323] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"126.617907ms","start":"2025-10-13T21:20:09.969340Z","end":"2025-10-13T21:20:10.095958Z","steps":["trace[1412483323] 'process raft request'  (duration: 117.373687ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:10.223826Z","caller":"traceutil/trace.go:172","msg":"trace[112034507] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"119.053308ms","start":"2025-10-13T21:20:10.104751Z","end":"2025-10-13T21:20:10.223804Z","steps":["trace[112034507] 'process raft request'  (duration: 97.050551ms)","trace[112034507] 'compare'  (duration: 21.898434ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T21:20:16.108251Z","caller":"traceutil/trace.go:172","msg":"trace[1288555985] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"112.647003ms","start":"2025-10-13T21:20:15.995571Z","end":"2025-10-13T21:20:16.108218Z","steps":["trace[1288555985] 'process raft request'  (duration: 101.143409ms)","trace[1288555985] 'compare'  (duration: 11.306947ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T21:20:16.333074Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.591605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:16.333188Z","caller":"traceutil/trace.go:172","msg":"trace[1456737397] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:1133; }","duration":"138.731306ms","start":"2025-10-13T21:20:16.194438Z","end":"2025-10-13T21:20:16.333170Z","steps":["trace[1456737397] 'agreement among raft nodes before linearized reading'  (duration: 64.699957ms)","trace[1456737397] 'range keys from in-memory index tree'  (duration: 73.857235ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T21:20:16.333181Z","caller":"traceutil/trace.go:172","msg":"trace[1865999962] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"134.45271ms","start":"2025-10-13T21:20:16.198713Z","end":"2025-10-13T21:20:16.333166Z","steps":["trace[1865999962] 'process raft request'  (duration: 134.416234ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:16.333200Z","caller":"traceutil/trace.go:172","msg":"trace[1272552941] transaction","detail":"{read_only:false; response_revision:1134; number_of_response:1; }","duration":"175.928396ms","start":"2025-10-13T21:20:16.157255Z","end":"2025-10-13T21:20:16.333183Z","steps":["trace[1272552941] 'process raft request'  (duration: 101.896571ms)","trace[1272552941] 'compare'  (duration: 73.832683ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T21:20:16.333274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.756044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-13T21:20:16.333300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.690863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:16.333327Z","caller":"traceutil/trace.go:172","msg":"trace[973915948] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1135; }","duration":"115.824732ms","start":"2025-10-13T21:20:16.217492Z","end":"2025-10-13T21:20:16.333317Z","steps":["trace[973915948] 'agreement among raft nodes before linearized reading'  (duration: 115.709322ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:16.333340Z","caller":"traceutil/trace.go:172","msg":"trace[1818805710] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"108.735262ms","start":"2025-10-13T21:20:16.224596Z","end":"2025-10-13T21:20:16.333331Z","steps":["trace[1818805710] 'agreement among raft nodes before linearized reading'  (duration: 108.666457ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:25.265744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.742383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:25.265856Z","caller":"traceutil/trace.go:172","msg":"trace[534535115] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1158; }","duration":"100.872855ms","start":"2025-10-13T21:20:25.164960Z","end":"2025-10-13T21:20:25.265832Z","steps":["trace[534535115] 'range keys from in-memory index tree'  (duration: 100.617376ms)"],"step_count":1}
	
	
	==> gcp-auth [1ca16a8f31ca6b8e660253e4041382c226282d784a9c6661b7394d9464b80c6b] <==
	2025/10/13 21:20:32 GCP Auth Webhook started!
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	2025/10/13 21:21:15 Ready to marshal response ...
	2025/10/13 21:21:15 Ready to write response ...
	2025/10/13 21:21:20 Ready to marshal response ...
	2025/10/13 21:21:20 Ready to write response ...
	2025/10/13 21:21:23 Ready to marshal response ...
	2025/10/13 21:21:23 Ready to write response ...
	2025/10/13 21:21:23 Ready to marshal response ...
	2025/10/13 21:21:23 Ready to write response ...
	2025/10/13 21:21:31 Ready to marshal response ...
	2025/10/13 21:21:31 Ready to write response ...
	2025/10/13 21:21:35 Ready to marshal response ...
	2025/10/13 21:21:35 Ready to write response ...
	2025/10/13 21:22:00 Ready to marshal response ...
	2025/10/13 21:22:00 Ready to write response ...
	2025/10/13 21:23:38 Ready to marshal response ...
	2025/10/13 21:23:38 Ready to write response ...
	
	
	==> kernel <==
	 21:23:40 up  1:06,  0 user,  load average: 0.36, 17.66, 50.00
	Linux addons-143775 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057] <==
	I1013 21:21:33.666396       1 main.go:301] handling current node
	I1013 21:21:43.667138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:21:43.667173       1 main.go:301] handling current node
	I1013 21:21:53.672113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:21:53.672157       1 main.go:301] handling current node
	I1013 21:22:03.668071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:03.668122       1 main.go:301] handling current node
	I1013 21:22:13.667662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:13.667699       1 main.go:301] handling current node
	I1013 21:22:23.672953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:23.673012       1 main.go:301] handling current node
	I1013 21:22:33.671182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:33.671222       1 main.go:301] handling current node
	I1013 21:22:43.667360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:43.667410       1 main.go:301] handling current node
	I1013 21:22:53.669204       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:22:53.669250       1 main.go:301] handling current node
	I1013 21:23:03.666440       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:23:03.666486       1 main.go:301] handling current node
	I1013 21:23:13.667154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:23:13.667188       1 main.go:301] handling current node
	I1013 21:23:23.673175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:23:23.673231       1 main.go:301] handling current node
	I1013 21:23:33.671128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:23:33.671160       1 main.go:301] handling current node
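
The kindnet output is a fixed-interval reconciliation loop: roughly every ten seconds (see the timestamps) it re-lists node IPs and re-applies networking for the single node in this cluster. A minimal sketch of that pattern; the ten-second cadence and the node IP come from the log, everything else is assumed.

    package main

    import (
    	"log"
    	"time"
    )

    // reconcile stands in for kindnet's per-tick work: list node IPs and
    // ensure routes/NAT rules match. The body here is a placeholder.
    func reconcile(nodeIPs map[string]struct{}) {
    	log.Printf("Handling node with IPs: %v", nodeIPs)
    	log.Print("handling current node")
    }

    func main() {
    	ips := map[string]struct{}{"192.168.49.2": {}} // node IP from the log
    	ticker := time.NewTicker(10 * time.Second)     // cadence seen in the log
    	defer ticker.Stop()
    	for i := 0; i < 3; i++ { // bounded here; the real daemon loops forever
    		<-ticker.C
    		reconcile(ips)
    	}
    }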
	
	
	==> kube-apiserver [fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786] <==
	W1013 21:19:41.593104       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:41.600153       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:54.016822       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.016945       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.016965       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.017025       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.037647       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.037693       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.043334       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.043376       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:20:11.992630       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 21:20:11.992655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	E1013 21:20:11.992703       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 21:20:11.993150       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	E1013 21:20:11.998319       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	I1013 21:20:12.048418       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 21:21:09.916275       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51806: use of closed network connection
	E1013 21:21:10.069433       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51828: use of closed network connection
	I1013 21:21:15.825252       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 21:21:16.024671       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.125.6"}
	I1013 21:21:45.121516       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1013 21:23:38.923060       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.115.17"}
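
The webhook errors at 21:19:54 illustrate fail-open admission: the gcp-auth mutating webhook was unreachable while its pod was still starting, and because the webhook's failure policy tolerates errors, the apiserver admitted the objects with a warning instead of rejecting them. A schematic of that decision; the Ignore policy is inferred from the "failing open" message, and all names here are illustrative.

    package main

    import (
    	"errors"
    	"fmt"
    )

    type failurePolicy int

    const (
    	ignore failurePolicy = iota // "fail open": admit despite a webhook error
    	fail                        // "fail closed": reject on a webhook error
    )

    // admit sketches how an apiserver handles an unreachable admission webhook.
    func admit(policy failurePolicy, callWebhook func() error) error {
    	if err := callWebhook(); err != nil {
    		if policy == ignore {
    			fmt.Println("Failed calling webhook, failing open:", err)
    			return nil // the object is admitted anyway
    		}
    		return err // the object is rejected
    	}
    	return nil
    }

    func main() {
    	unreachable := func() error {
    		return errors.New("dial tcp 10.102.115.42:443: connect: connection refused")
    	}
    	fmt.Println("admitted:", admit(ignore, unreachable) == nil)
    }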
	
	
	==> kube-controller-manager [6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9] <==
	I1013 21:19:11.549475       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:19:11.549594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:19:11.549699       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:19:11.549717       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 21:19:11.549814       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 21:19:11.549940       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:19:11.550066       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:19:11.552320       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:19:11.552336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 21:19:11.553200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:19:11.553221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:19:11.554064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:19:11.554067       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:11.556330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:11.570272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1013 21:19:41.558763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:19:41.558917       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 21:19:41.558974       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 21:19:41.580056       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 21:19:41.585033       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 21:19:41.660006       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:41.685522       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:19:56.478843       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1013 21:20:11.665562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:20:11.692800       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b] <==
	I1013 21:19:13.390136       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:19:13.601601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:19:13.702137       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:19:13.702180       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:19:13.702294       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:19:13.774189       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:19:13.775067       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:19:13.784459       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:19:13.791656       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:19:13.791850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:19:13.793950       1 config.go:200] "Starting service config controller"
	I1013 21:19:13.796099       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:19:13.794394       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:19:13.796194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:19:13.794416       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:19:13.796242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:19:13.795285       1 config.go:309] "Starting node config controller"
	I1013 21:19:13.796290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:19:13.796314       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:19:13.896355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:19:13.896364       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:19:13.896451       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363] <==
	I1013 21:19:05.169728       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:19:05.171515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:19:05.171550       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:19:05.171761       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:19:05.171816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 21:19:05.174153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:19:05.174477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:19:05.174496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:19:05.174666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:19:05.174830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:19:05.174981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:19:05.175082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:19:05.175149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:19:05.175404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:19:05.175508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:19:05.175656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:19:05.175510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:19:05.175681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:19:05.175679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:19:05.175887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:19:05.175889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:19:05.175165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:19:05.176062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:19:05.176838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1013 21:19:06.571856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
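
The burst of "Failed to watch ... is forbidden" errors at 21:19:05, followed by "Caches are synced" about a second later, is the usual startup race: the scheduler's informers begin listing before the apiserver has finished bootstrapping RBAC, and they retry until authorized. A sketch of that retry shape; the backoff values and the three-attempt failure below are illustrative, not taken from the scheduler's code.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // listWithRetry sketches reflector behavior during the startup window:
    // initial LISTs fail with "forbidden" until RBAC bootstrap completes,
    // and the watcher retries with backoff until one succeeds.
    func listWithRetry(list func() error) {
    	backoff := 50 * time.Millisecond
    	for {
    		if err := list(); err != nil {
    			fmt.Println("Failed to watch:", err)
    			time.Sleep(backoff)
    			if backoff < time.Second {
    				backoff *= 2
    			}
    			continue
    		}
    		fmt.Println("Caches are synced")
    		return
    	}
    }

    func main() {
    	attempts := 0
    	listWithRetry(func() error {
    		attempts++
    		if attempts < 3 { // first attempts fail, as in the log
    			return errors.New(`pods is forbidden: User "system:kube-scheduler" cannot list resource "pods"`)
    		}
    		return nil
    	})
    }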
	
	
	==> kubelet <==
	Oct 13 21:22:06 addons-143775 kubelet[1297]: I1013 21:22:06.779619    1297 scope.go:117] "RemoveContainer" containerID="0484e322ef2cec39179660cf9c5f6d3531cbc4a63e44c2771ddf75fda96af1d8"
	Oct 13 21:22:06 addons-143775 kubelet[1297]: I1013 21:22:06.787813    1297 scope.go:117] "RemoveContainer" containerID="a6599dcf419caa68e3a5688ea90af4cf94f008310613a87418bb363104712e36"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.488937    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a73e4851-a87a-11f0-9e31-6a2d18bfbaeb\") pod \"da666582-0497-40af-9328-40a8078f4967\" (UID: \"da666582-0497-40af-9328-40a8078f4967\") "
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.489034    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/da666582-0497-40af-9328-40a8078f4967-gcp-creds\") pod \"da666582-0497-40af-9328-40a8078f4967\" (UID: \"da666582-0497-40af-9328-40a8078f4967\") "
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.489068    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8hk\" (UniqueName: \"kubernetes.io/projected/da666582-0497-40af-9328-40a8078f4967-kube-api-access-gd8hk\") pod \"da666582-0497-40af-9328-40a8078f4967\" (UID: \"da666582-0497-40af-9328-40a8078f4967\") "
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.489090    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da666582-0497-40af-9328-40a8078f4967-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "da666582-0497-40af-9328-40a8078f4967" (UID: "da666582-0497-40af-9328-40a8078f4967"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.489250    1297 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/da666582-0497-40af-9328-40a8078f4967-gcp-creds\") on node \"addons-143775\" DevicePath \"\""
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.490928    1297 scope.go:117] "RemoveContainer" containerID="32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.491802    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da666582-0497-40af-9328-40a8078f4967-kube-api-access-gd8hk" (OuterVolumeSpecName: "kube-api-access-gd8hk") pod "da666582-0497-40af-9328-40a8078f4967" (UID: "da666582-0497-40af-9328-40a8078f4967"). InnerVolumeSpecName "kube-api-access-gd8hk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.492467    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^a73e4851-a87a-11f0-9e31-6a2d18bfbaeb" (OuterVolumeSpecName: "task-pv-storage") pod "da666582-0497-40af-9328-40a8078f4967" (UID: "da666582-0497-40af-9328-40a8078f4967"). InnerVolumeSpecName "pvc-5a3a91cb-d073-43a4-8f17-be516a74880e". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.499288    1297 scope.go:117] "RemoveContainer" containerID="32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: E1013 21:22:07.499646    1297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4\": container with ID starting with 32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4 not found: ID does not exist" containerID="32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.499684    1297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4"} err="failed to get container status \"32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4\": rpc error: code = NotFound desc = could not find container \"32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4\": container with ID starting with 32475c4e546d86a7b2f0719d4f6800f925596fed03159faf9bcde5e5912723b4 not found: ID does not exist"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.590635    1297 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-5a3a91cb-d073-43a4-8f17-be516a74880e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a73e4851-a87a-11f0-9e31-6a2d18bfbaeb\") on node \"addons-143775\" "
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.590671    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gd8hk\" (UniqueName: \"kubernetes.io/projected/da666582-0497-40af-9328-40a8078f4967-kube-api-access-gd8hk\") on node \"addons-143775\" DevicePath \"\""
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.598583    1297 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-5a3a91cb-d073-43a4-8f17-be516a74880e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a73e4851-a87a-11f0-9e31-6a2d18bfbaeb") on node "addons-143775"
	Oct 13 21:22:07 addons-143775 kubelet[1297]: I1013 21:22:07.692047    1297 reconciler_common.go:299] "Volume detached for volume \"pvc-5a3a91cb-d073-43a4-8f17-be516a74880e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a73e4851-a87a-11f0-9e31-6a2d18bfbaeb\") on node \"addons-143775\" DevicePath \"\""
	Oct 13 21:22:08 addons-143775 kubelet[1297]: I1013 21:22:08.739254    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da666582-0497-40af-9328-40a8078f4967" path="/var/lib/kubelet/pods/da666582-0497-40af-9328-40a8078f4967/volumes"
	Oct 13 21:22:11 addons-143775 kubelet[1297]: I1013 21:22:11.525237    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-skkk5" podStartSLOduration=177.361112966 podStartE2EDuration="2m58.525213954s" podCreationTimestamp="2025-10-13 21:19:13 +0000 UTC" firstStartedPulling="2025-10-13 21:22:09.760283535 +0000 UTC m=+183.108429808" lastFinishedPulling="2025-10-13 21:22:10.924384508 +0000 UTC m=+184.272530796" observedRunningTime="2025-10-13 21:22:11.524582297 +0000 UTC m=+184.872728591" watchObservedRunningTime="2025-10-13 21:22:11.525213954 +0000 UTC m=+184.873360247"
	Oct 13 21:22:23 addons-143775 kubelet[1297]: I1013 21:22:23.736192    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ppkwz" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:22:37 addons-143775 kubelet[1297]: I1013 21:22:37.736065    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dncl2" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:22:58 addons-143775 kubelet[1297]: I1013 21:22:58.736800    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rrhdd" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:23:38 addons-143775 kubelet[1297]: I1013 21:23:38.736965    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ppkwz" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:23:38 addons-143775 kubelet[1297]: I1013 21:23:38.910320    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4tp\" (UniqueName: \"kubernetes.io/projected/c555e683-cb56-4113-8c62-36261f004cfd-kube-api-access-ws4tp\") pod \"hello-world-app-5d498dc89-m4txs\" (UID: \"c555e683-cb56-4113-8c62-36261f004cfd\") " pod="default/hello-world-app-5d498dc89-m4txs"
	Oct 13 21:23:38 addons-143775 kubelet[1297]: I1013 21:23:38.910392    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c555e683-cb56-4113-8c62-36261f004cfd-gcp-creds\") pod \"hello-world-app-5d498dc89-m4txs\" (UID: \"c555e683-cb56-4113-8c62-36261f004cfd\") " pod="default/hello-world-app-5d498dc89-m4txs"
	
	
	==> storage-provisioner [e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b] <==
	W1013 21:23:15.539501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:17.542920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:17.548569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:19.551555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:19.555451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:21.558904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:21.562808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:23.565362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:23.569757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:25.574022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:25.578319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:27.581113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:27.584869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:29.587807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:29.593120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:31.596046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:31.600284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:33.603559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:33.607691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:35.611528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:35.616934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:37.620763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:37.625475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:39.629016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:39.633894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
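Note on the storage-provisioner warnings in the log above: v1 Endpoints is deprecated as of Kubernetes 1.33 in favor of discovery.k8s.io/v1 EndpointSlice, so the provisioner's leader-election watch trips the warning on every request. A minimal client-go sketch of the suggested replacement follows; it is illustrative only, not part of the provisioner, and the "kube-system" namespace and default kubeconfig path are placeholder assumptions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List EndpointSlices (discovery.k8s.io/v1) instead of the deprecated
	// v1 Endpoints that the warnings above flag on every poll.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}
```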
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-143775 -n addons-143775
helpers_test.go:269: (dbg) Run:  kubectl --context addons-143775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr: exit status 1 (61.196455ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jm9d9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nrsqr" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (244.095489ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:23:41.519061  246997 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:23:41.519365  246997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:23:41.519376  246997 out.go:374] Setting ErrFile to fd 2...
	I1013 21:23:41.519380  246997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:23:41.519580  246997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:23:41.519896  246997 mustload.go:65] Loading cluster: addons-143775
	I1013 21:23:41.520260  246997 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:23:41.520282  246997 addons.go:606] checking whether the cluster is paused
	I1013 21:23:41.520364  246997 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:23:41.520378  246997 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:23:41.520784  246997 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:23:41.542582  246997 ssh_runner.go:195] Run: systemctl --version
	I1013 21:23:41.542643  246997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:23:41.560456  246997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:23:41.659844  246997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:23:41.659922  246997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:23:41.690984  246997 cri.go:89] found id: "95cc4a1bf7c6c190d77c5ca695c3d04c0d6d2e1e3d0e18f4626219f353f0775c"
	I1013 21:23:41.691027  246997 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:23:41.691033  246997 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:23:41.691038  246997 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:23:41.691042  246997 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:23:41.691046  246997 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:23:41.691049  246997 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:23:41.691052  246997 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:23:41.691054  246997 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:23:41.691060  246997 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:23:41.691062  246997 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:23:41.691065  246997 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:23:41.691067  246997 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:23:41.691070  246997 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:23:41.691073  246997 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:23:41.691085  246997 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:23:41.691100  246997 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:23:41.691104  246997 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:23:41.691107  246997 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:23:41.691109  246997 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:23:41.691112  246997 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:23:41.691114  246997 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:23:41.691117  246997 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:23:41.691119  246997 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:23:41.691121  246997 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:23:41.691123  246997 cri.go:89] found id: ""
	I1013 21:23:41.691164  246997 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:23:41.706918  246997 out.go:203] 
	W1013 21:23:41.708204  246997 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:23:41.708224  246997 out.go:285] * 
	* 
	W1013 21:23:41.711395  246997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:23:41.712587  246997 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
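Each `addons disable` call in this report fails the same way: the paused-state check lists kube-system containers with crictl successfully, then shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node (crio evidently keeps its runtime state under a different root). A minimal Go sketch of that two-step probe follows, as an illustration of the failure mode only, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: enumerate kube-system containers; this step succeeds in the log.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2: ask runc which containers are paused; this is the command that
	// exits with status 1 above, since /run/runc is absent on the crio node.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("runc state: %s\n", out)
}
```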
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable ingress --alsologtostderr -v=1: exit status 11 (237.253628ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:23:41.765483  247062 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:23:41.765784  247062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:23:41.765795  247062 out.go:374] Setting ErrFile to fd 2...
	I1013 21:23:41.765801  247062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:23:41.766039  247062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:23:41.766355  247062 mustload.go:65] Loading cluster: addons-143775
	I1013 21:23:41.766736  247062 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:23:41.766756  247062 addons.go:606] checking whether the cluster is paused
	I1013 21:23:41.766852  247062 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:23:41.766869  247062 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:23:41.767353  247062 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:23:41.784903  247062 ssh_runner.go:195] Run: systemctl --version
	I1013 21:23:41.784966  247062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:23:41.802605  247062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:23:41.899803  247062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:23:41.899872  247062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:23:41.929600  247062 cri.go:89] found id: "95cc4a1bf7c6c190d77c5ca695c3d04c0d6d2e1e3d0e18f4626219f353f0775c"
	I1013 21:23:41.929638  247062 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:23:41.929645  247062 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:23:41.929651  247062 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:23:41.929656  247062 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:23:41.929662  247062 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:23:41.929667  247062 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:23:41.929671  247062 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:23:41.929676  247062 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:23:41.929689  247062 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:23:41.929695  247062 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:23:41.929697  247062 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:23:41.929700  247062 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:23:41.929702  247062 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:23:41.929704  247062 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:23:41.929717  247062 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:23:41.929726  247062 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:23:41.929731  247062 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:23:41.929733  247062 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:23:41.929735  247062 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:23:41.929738  247062 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:23:41.929740  247062 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:23:41.929743  247062 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:23:41.929745  247062 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:23:41.929758  247062 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:23:41.929761  247062 cri.go:89] found id: ""
	I1013 21:23:41.929815  247062 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:23:41.944194  247062 out.go:203] 
	W1013 21:23:41.945624  247062 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:23:41.945645  247062 out.go:285] * 
	* 
	W1013 21:23:41.948787  247062 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:23:41.950320  247062 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.39s)
x
+
TestAddons/parallel/InspektorGadget (5.28s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lkcrw" [61d0f5a2-131b-4102-8f29-a340a5e0d003] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004440686s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (271.282846ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:21:17.962369  242383 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:17.962687  242383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:17.962701  242383 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:17.962708  242383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:17.963032  242383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:17.963383  242383 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:17.963846  242383 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:17.963869  242383 addons.go:606] checking whether the cluster is paused
	I1013 21:21:17.964012  242383 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:17.964033  242383 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:17.964543  242383 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:17.985648  242383 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:17.985723  242383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:18.007526  242383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:18.114664  242383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:18.114770  242383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:18.147836  242383 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:18.147864  242383 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:18.147867  242383 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:18.147885  242383 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:18.147894  242383 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:18.147897  242383 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:18.147900  242383 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:18.147902  242383 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:18.147906  242383 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:18.147913  242383 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:18.147916  242383 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:18.147920  242383 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:18.147924  242383 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:18.147927  242383 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:18.147931  242383 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:18.147937  242383 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:18.147941  242383 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:18.147945  242383 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:18.147949  242383 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:18.147953  242383 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:18.147956  242383 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:18.147961  242383 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:18.147964  242383 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:18.147968  242383 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:18.147971  242383 cri.go:89] found id: ""
	I1013 21:21:18.148043  242383 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:18.165081  242383 out.go:203] 
	W1013 21:21:18.166514  242383 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:18.166538  242383 out.go:285] * 
	* 
	W1013 21:21:18.169659  242383 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:18.171332  242383 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)
x
+
TestAddons/parallel/MetricsServer (5.32s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.927993ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002905821s
addons_test.go:463: (dbg) Run:  kubectl --context addons-143775 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (246.502035ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:21:15.434185  241713 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:15.434526  241713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.434537  241713 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:15.434544  241713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.434752  241713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:15.435094  241713 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:15.435481  241713 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.435502  241713 addons.go:606] checking whether the cluster is paused
	I1013 21:21:15.435602  241713 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.435619  241713 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:15.436034  241713 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:15.454300  241713 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:15.454375  241713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:15.471861  241713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:15.569570  241713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:15.569658  241713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:15.602460  241713 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:15.602485  241713 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:15.602490  241713 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:15.602494  241713 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:15.602497  241713 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:15.602502  241713 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:15.602506  241713 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:15.602510  241713 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:15.602513  241713 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:15.602530  241713 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:15.602535  241713 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:15.602539  241713 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:15.602546  241713 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:15.602551  241713 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:15.602559  241713 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:15.602570  241713 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:15.602578  241713 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:15.602583  241713 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:15.602587  241713 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:15.602591  241713 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:15.602595  241713 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:15.602600  241713 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:15.602608  241713 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:15.602612  241713 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:15.602617  241713 cri.go:89] found id: ""
	I1013 21:21:15.602664  241713 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:15.618521  241713 out.go:203] 
	W1013 21:21:15.620205  241713 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:15.620235  241713 out.go:285] * 
	* 
	W1013 21:21:15.624143  241713 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:15.625598  241713 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
x
+
TestAddons/parallel/CSI (44.86s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1013 21:21:23.759096  230929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1013 21:21:23.762607  230929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 21:21:23.762625  230929 kapi.go:107] duration metric: took 3.582748ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.592134ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-143775 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc -o jsonpath={.status.phase} -n default
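The repeated `get pvc hpvc -o jsonpath={.status.phase}` calls above are a poll loop waiting for the claim to leave Pending. A hedged client-go equivalent is sketched below; it is an assumed helper, not the test's helpers_test.go, and the 2-second interval is a guess.

```go
package pvcwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls the claim's status.phase until it is Bound or the
// timeout elapses, mirroring the 6m0s wait and the repeated jsonpath reads
// in the test output above.
func WaitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // treat lookup errors as fatal
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}
```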
addons_test.go:562: (dbg) Run:  kubectl --context addons-143775 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [30d25837-56d9-4cd9-938b-a80062856b38] Pending
helpers_test.go:352: "task-pv-pod" [30d25837-56d9-4cd9-938b-a80062856b38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [30d25837-56d9-4cd9-938b-a80062856b38] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004296261s
addons_test.go:572: (dbg) Run:  kubectl --context addons-143775 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-143775 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-143775 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-143775 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-143775 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-143775 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-143775 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [da666582-0497-40af-9328-40a8078f4967] Pending
helpers_test.go:352: "task-pv-pod-restore" [da666582-0497-40af-9328-40a8078f4967] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [da666582-0497-40af-9328-40a8078f4967] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003732828s
addons_test.go:614: (dbg) Run:  kubectl --context addons-143775 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-143775 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-143775 delete volumesnapshot new-snapshot-demo
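The restore path above hinges on the PVC's dataSource field: pvc-restore.yaml presumably declares a claim sourced from the new-snapshot-demo VolumeSnapshot. A sketch of that object in client-go terms follows; the size, access mode, and storage class name are assumptions, not the actual testdata contents.

```go
package pvcwait

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RestorePVC builds a claim restored from the VolumeSnapshot taken above.
func RestorePVC() *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	storageClass := "csi-hostpath-sc" // assumed; the addon's class name may differ
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "hpvc-restore", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// DataSource is what turns a plain claim into a snapshot restore.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     "new-snapshot-demo",
			},
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}
```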
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (241.54346ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:22:08.187712  244876 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:22:08.188071  244876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:22:08.188083  244876 out.go:374] Setting ErrFile to fd 2...
	I1013 21:22:08.188090  244876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:22:08.188316  244876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:22:08.188611  244876 mustload.go:65] Loading cluster: addons-143775
	I1013 21:22:08.189712  244876 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:22:08.189778  244876 addons.go:606] checking whether the cluster is paused
	I1013 21:22:08.190209  244876 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:22:08.190247  244876 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:22:08.190786  244876 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:22:08.208425  244876 ssh_runner.go:195] Run: systemctl --version
	I1013 21:22:08.208478  244876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:22:08.225704  244876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:22:08.325001  244876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:22:08.325099  244876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:22:08.356276  244876 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:22:08.356299  244876 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:22:08.356303  244876 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:22:08.356306  244876 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:22:08.356308  244876 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:22:08.356312  244876 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:22:08.356316  244876 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:22:08.356324  244876 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:22:08.356328  244876 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:22:08.356337  244876 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:22:08.356341  244876 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:22:08.356345  244876 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:22:08.356349  244876 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:22:08.356353  244876 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:22:08.356361  244876 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:22:08.356368  244876 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:22:08.356375  244876 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:22:08.356380  244876 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:22:08.356383  244876 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:22:08.356385  244876 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:22:08.356398  244876 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:22:08.356402  244876 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:22:08.356405  244876 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:22:08.356410  244876 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:22:08.356418  244876 cri.go:89] found id: ""
	I1013 21:22:08.356466  244876 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:22:08.371301  244876 out.go:203] 
	W1013 21:22:08.372755  244876 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:22:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:22:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:22:08.372774  244876 out.go:285] * 
	* 
	W1013 21:22:08.375928  244876 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:22:08.377421  244876 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (238.273618ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:22:08.429694  244952 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:22:08.430019  244952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:22:08.430029  244952 out.go:374] Setting ErrFile to fd 2...
	I1013 21:22:08.430039  244952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:22:08.430231  244952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:22:08.430503  244952 mustload.go:65] Loading cluster: addons-143775
	I1013 21:22:08.430847  244952 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:22:08.430861  244952 addons.go:606] checking whether the cluster is paused
	I1013 21:22:08.430943  244952 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:22:08.430955  244952 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:22:08.431394  244952 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:22:08.449218  244952 ssh_runner.go:195] Run: systemctl --version
	I1013 21:22:08.449282  244952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:22:08.466470  244952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:22:08.563942  244952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:22:08.564048  244952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:22:08.595395  244952 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:22:08.595419  244952 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:22:08.595423  244952 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:22:08.595425  244952 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:22:08.595438  244952 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:22:08.595443  244952 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:22:08.595445  244952 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:22:08.595448  244952 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:22:08.595450  244952 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:22:08.595456  244952 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:22:08.595459  244952 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:22:08.595462  244952 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:22:08.595464  244952 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:22:08.595467  244952 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:22:08.595470  244952 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:22:08.595473  244952 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:22:08.595482  244952 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:22:08.595485  244952 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:22:08.595488  244952 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:22:08.595490  244952 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:22:08.595492  244952 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:22:08.595495  244952 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:22:08.595497  244952 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:22:08.595499  244952 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:22:08.595502  244952 cri.go:89] found id: ""
	I1013 21:22:08.595538  244952 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:22:08.609947  244952 out.go:203] 
	W1013 21:22:08.611296  244952 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:22:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:22:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:22:08.611335  244952 out.go:285] * 
	* 
	W1013 21:22:08.614854  244952 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:22:08.616231  244952 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.86s)
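
Both disable commands above fail at the same point: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node. On this crio node the bare runc call errors out because runc's default state directory, /run/runc, does not exist, so the pause check itself fails and the CLI exits 11. The kube-system containers are plainly running (the crictl step returns 24 container IDs), so the cluster is not actually paused; the check is simply looking where this runtime keeps no state. A minimal reproduction of the two steps on the node is sketched below; the crio runtime-root path is an assumption, so read the real one from `runtime_root` in crio.conf before relying on it.

    $ minikube ssh -p addons-143775
    # Step 1 of the pause check (succeeds; same command as in the log above):
    $ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Step 2 (fails: "open /run/runc: no such file or directory"):
    $ sudo runc list -f json
    # runc only sees containers under its --root; crio keeps runc state under
    # its configured runtime_root, so point runc there instead (path assumed):
    $ sudo runc --root /run/crio/runc list
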
TestAddons/parallel/Headlamp (2.59s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-143775 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-143775 --alsologtostderr -v=1: exit status 11 (236.467596ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 21:21:10.357612  240863 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:10.357917  240863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:10.357929  240863 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:10.357935  240863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:10.358212  240863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:10.358523  240863 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:10.358894  240863 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:10.358915  240863 addons.go:606] checking whether the cluster is paused
	I1013 21:21:10.359035  240863 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:10.359054  240863 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:10.359587  240863 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:10.377472  240863 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:10.377524  240863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:10.394646  240863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:10.490926  240863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:10.491012  240863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:10.520907  240863 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:10.520928  240863 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:10.520932  240863 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:10.520935  240863 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:10.520938  240863 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:10.520942  240863 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:10.520944  240863 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:10.520946  240863 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:10.520949  240863 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:10.520956  240863 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:10.520959  240863 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:10.520961  240863 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:10.520963  240863 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:10.520966  240863 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:10.520968  240863 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:10.520976  240863 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:10.520979  240863 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:10.520983  240863 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:10.520985  240863 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:10.520987  240863 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:10.521002  240863 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:10.521006  240863 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:10.521010  240863 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:10.521014  240863 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:10.521017  240863 cri.go:89] found id: ""
	I1013 21:21:10.521065  240863 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:10.535644  240863 out.go:203] 
	W1013 21:21:10.537218  240863 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:10.537237  240863 out.go:285] * 
	* 
	W1013 21:21:10.540253  240863 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:10.541703  240863 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-143775 --alsologtostderr -v=1": exit status 11
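
The enable path runs the identical pause check, so Headlamp trips the mirror-image error (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED) before the addon is ever deployed. The docker inspect output in the post-mortem below already shows the node container running and unpaused; a quick cross-check that avoids runc entirely is to ask Docker and the CRI directly (a sketch using this report's profile name):

    $ docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' addons-143775
    running paused=false
    # List running kube-system containers through the CRI instead of runc:
    $ minikube ssh -p addons-143775 -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system
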
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-143775
helpers_test.go:243: (dbg) docker inspect addons-143775:
-- stdout --
	[
	    {
	        "Id": "541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1",
	        "Created": "2025-10-13T21:18:52.880794569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:18:52.91981774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/hosts",
	        "LogPath": "/var/lib/docker/containers/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1/541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1-json.log",
	        "Name": "/addons-143775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-143775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-143775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "541f9fcc19e3cfb62f371a3d70f52d04352b4b1c1570742330b1a02e20d8a8c1",
	                "LowerDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5ae37240a7894fc9d462336fb8242eb8d870b0241d674ba67879f6a4f41cbe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-143775",
	                "Source": "/var/lib/docker/volumes/addons-143775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-143775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-143775",
	                "name.minikube.sigs.k8s.io": "addons-143775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99ed5aa952ed99b68aef33c633333fdfdd4632dee17b0907a84d2df70a94220e",
	            "SandboxKey": "/var/run/docker/netns/99ed5aa952ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-143775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ab:16:a2:ad:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6cb13af425017a4154ca14bd547d1a6dd94adbcf73f90e6de0d88aea7818eb1",
	                    "EndpointID": "854498c27153985a651286f2f972df7bc39ab2aba4fa5217184799f1a30e7ce5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-143775",
	                        "541f9fcc19e3"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
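
The port map in this inspect output is also how the harness reaches the node over SSH: 22/tcp inside the container is published on 127.0.0.1:32768, which is exactly the value the Go template seen earlier in the stderr resolves. An equivalent manual connection, with the key path taken from the sshutil line in the log, would look like this (sketch):

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-143775
    32768
    $ ssh -i /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa -p 32768 docker@127.0.0.1
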
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-143775 -n addons-143775
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-143775 logs -n 25: (1.1603432s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-941848 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-941848   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ delete  │ -p download-only-941848                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-941848   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ start   │ -o=json --download-only -p download-only-318241 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-318241   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ delete  │ -p download-only-318241                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-318241   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ delete  │ -p download-only-941848                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-941848   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ delete  │ -p download-only-318241                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-318241   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ start   │ --download-only -p download-docker-704567 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-704567 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ -p download-docker-704567                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-704567 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ start   │ --download-only -p binary-mirror-784602 --alsologtostderr --binary-mirror http://127.0.0.1:46779 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-784602   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ -p binary-mirror-784602                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-784602   │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ addons  │ disable dashboard -p addons-143775                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-143775                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ start   │ -p addons-143775 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-143775 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-143775 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ enable headlamp -p addons-143775 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-143775          │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:18:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:18:29.147526  232267 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:18:29.147770  232267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:29.147778  232267 out.go:374] Setting ErrFile to fd 2...
	I1013 21:18:29.147782  232267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:29.147987  232267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:18:29.148567  232267 out.go:368] Setting JSON to false
	I1013 21:18:29.149409  232267 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3657,"bootTime":1760386652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:18:29.149509  232267 start.go:141] virtualization: kvm guest
	I1013 21:18:29.151721  232267 out.go:179] * [addons-143775] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:18:29.153116  232267 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:18:29.153156  232267 notify.go:220] Checking for updates...
	I1013 21:18:29.156011  232267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:18:29.157458  232267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:18:29.158915  232267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:18:29.160458  232267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:18:29.162155  232267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:18:29.163972  232267 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:18:29.187801  232267 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:18:29.187888  232267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:29.244400  232267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:29.233917433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:29.244506  232267 docker.go:318] overlay module found
	I1013 21:18:29.246528  232267 out.go:179] * Using the docker driver based on user configuration
	I1013 21:18:29.247881  232267 start.go:305] selected driver: docker
	I1013 21:18:29.247896  232267 start.go:925] validating driver "docker" against <nil>
	I1013 21:18:29.247911  232267 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:18:29.248484  232267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:29.308417  232267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:29.298843777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:29.308608  232267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:18:29.308808  232267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:18:29.310786  232267 out.go:179] * Using Docker driver with root privileges
	I1013 21:18:29.312109  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:18:29.312175  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:18:29.312186  232267 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 21:18:29.312269  232267 start.go:349] cluster config:
	{Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:29.313557  232267 out.go:179] * Starting "addons-143775" primary control-plane node in "addons-143775" cluster
	I1013 21:18:29.314888  232267 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:18:29.316147  232267 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 21:18:29.317190  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:29.317232  232267 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:18:29.317244  232267 cache.go:58] Caching tarball of preloaded images
	I1013 21:18:29.317320  232267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 21:18:29.317342  232267 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:18:29.317350  232267 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:18:29.317688  232267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json ...
	I1013 21:18:29.317713  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json: {Name:mk86885072ff6639c3332c248fc6f7264e47968c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:29.333580  232267 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 21:18:29.333720  232267 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 21:18:29.333736  232267 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1013 21:18:29.333741  232267 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1013 21:18:29.333749  232267 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1013 21:18:29.333753  232267 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1013 21:18:42.074621  232267 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1013 21:18:42.074662  232267 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:18:42.074715  232267 start.go:360] acquireMachinesLock for addons-143775: {Name:mk6f74072f84c857b4a9fd47fd2ff103ee669eed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:18:42.074840  232267 start.go:364] duration metric: took 101.596µs to acquireMachinesLock for "addons-143775"
	I1013 21:18:42.074867  232267 start.go:93] Provisioning new machine with config: &{Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:18:42.074952  232267 start.go:125] createHost starting for "" (driver="docker")
	I1013 21:18:42.076987  232267 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 21:18:42.077257  232267 start.go:159] libmachine.API.Create for "addons-143775" (driver="docker")
	I1013 21:18:42.077292  232267 client.go:168] LocalClient.Create starting
	I1013 21:18:42.077398  232267 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 21:18:42.169983  232267 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 21:18:42.339369  232267 cli_runner.go:164] Run: docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 21:18:42.356062  232267 cli_runner.go:211] docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 21:18:42.356149  232267 network_create.go:284] running [docker network inspect addons-143775] to gather additional debugging logs...
	I1013 21:18:42.356176  232267 cli_runner.go:164] Run: docker network inspect addons-143775
	W1013 21:18:42.373346  232267 cli_runner.go:211] docker network inspect addons-143775 returned with exit code 1
	I1013 21:18:42.373379  232267 network_create.go:287] error running [docker network inspect addons-143775]: docker network inspect addons-143775: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-143775 not found
	I1013 21:18:42.373393  232267 network_create.go:289] output of [docker network inspect addons-143775]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-143775 not found
	
	** /stderr **
	I1013 21:18:42.373479  232267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:18:42.392347  232267 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bf6dc0}
	I1013 21:18:42.392397  232267 network_create.go:124] attempt to create docker network addons-143775 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 21:18:42.392452  232267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-143775 addons-143775
	I1013 21:18:42.451093  232267 network_create.go:108] docker network addons-143775 192.168.49.0/24 created
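For reference, the bridge network created above can be checked with a one-line inspect. This is an illustrative sketch against the profile name from this run, not output captured by the test:

    # Confirm the subnet/gateway minikube assigned to the cluster network.
    docker network inspect addons-143775 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'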
	I1013 21:18:42.451132  232267 kic.go:121] calculated static IP "192.168.49.2" for the "addons-143775" container
	I1013 21:18:42.451209  232267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 21:18:42.467750  232267 cli_runner.go:164] Run: docker volume create addons-143775 --label name.minikube.sigs.k8s.io=addons-143775 --label created_by.minikube.sigs.k8s.io=true
	I1013 21:18:42.488372  232267 oci.go:103] Successfully created a docker volume addons-143775
	I1013 21:18:42.488448  232267 cli_runner.go:164] Run: docker run --rm --name addons-143775-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --entrypoint /usr/bin/test -v addons-143775:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 21:18:48.444317  232267 cli_runner.go:217] Completed: docker run --rm --name addons-143775-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --entrypoint /usr/bin/test -v addons-143775:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (5.95582857s)
	I1013 21:18:48.444346  232267 oci.go:107] Successfully prepared a docker volume addons-143775
	I1013 21:18:48.444397  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:48.444422  232267 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 21:18:48.444471  232267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-143775:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 21:18:52.806171  232267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-143775:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.361659169s)
	I1013 21:18:52.806208  232267 kic.go:203] duration metric: took 4.361780531s to extract preloaded images to volume ...
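The preload extracted above is an ordinary lz4-compressed tar of container images; a quick way to peek at it (assuming the lz4 CLI is installed on the host, which this log does not show) is:

    # List the first few entries of the preloaded image tarball.
    lz4 -dc /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 \
      | tar -t | head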
	W1013 21:18:52.806321  232267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 21:18:52.806379  232267 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 21:18:52.806439  232267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 21:18:52.865156  232267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-143775 --name addons-143775 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-143775 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-143775 --network addons-143775 --ip 192.168.49.2 --volume addons-143775:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
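Each --publish=127.0.0.1:: mapping in the docker run above binds a container port to an ephemeral localhost port. As an illustrative check (not captured in this log), the assigned host ports can be resolved with:

    # Resolve the host ports chosen for SSH and the API server.
    docker port addons-143775 22
    docker port addons-143775 8443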
	I1013 21:18:53.169603  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Running}}
	I1013 21:18:53.188321  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.206791  232267 cli_runner.go:164] Run: docker exec addons-143775 stat /var/lib/dpkg/alternatives/iptables
	I1013 21:18:53.249460  232267 oci.go:144] the created container "addons-143775" has a running status.
	I1013 21:18:53.249498  232267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa...
	I1013 21:18:53.498970  232267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 21:18:53.527121  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.546199  232267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 21:18:53.546218  232267 kic_runner.go:114] Args: [docker exec --privileged addons-143775 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 21:18:53.592450  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:18:53.611539  232267 machine.go:93] provisionDockerMachine start ...
	I1013 21:18:53.611634  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.631081  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.631436  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.631455  232267 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:18:53.766806  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-143775
	
	I1013 21:18:53.766839  232267 ubuntu.go:182] provisioning hostname "addons-143775"
	I1013 21:18:53.766908  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.784799  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.785107  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.785127  232267 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-143775 && echo "addons-143775" | sudo tee /etc/hostname
	I1013 21:18:53.931488  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-143775
	
	I1013 21:18:53.931573  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:53.949462  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:53.949686  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:53.949704  232267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-143775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-143775/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-143775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:18:54.085163  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:18:54.085196  232267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 21:18:54.085216  232267 ubuntu.go:190] setting up certificates
	I1013 21:18:54.085231  232267 provision.go:84] configureAuth start
	I1013 21:18:54.085292  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:54.102149  232267 provision.go:143] copyHostCerts
	I1013 21:18:54.102225  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 21:18:54.102333  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 21:18:54.102408  232267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 21:18:54.102470  232267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.addons-143775 san=[127.0.0.1 192.168.49.2 addons-143775 localhost minikube]
	I1013 21:18:54.690134  232267 provision.go:177] copyRemoteCerts
	I1013 21:18:54.690198  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:18:54.690235  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:54.707909  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:54.805379  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 21:18:54.824930  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:18:54.842129  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
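As a hedged aside, the SANs baked into the server certificate generated and copied above (127.0.0.1, 192.168.49.2, addons-143775, localhost, minikube) could be confirmed with OpenSSL 1.1.1+ syntax:

    # Print the subjectAltName extension of the generated server cert.
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem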
	I1013 21:18:54.859907  232267 provision.go:87] duration metric: took 774.657382ms to configureAuth
	I1013 21:18:54.859937  232267 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:18:54.860132  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:18:54.860234  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:54.878285  232267 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:54.878568  232267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 21:18:54.878598  232267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:18:55.129338  232267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:18:55.129365  232267 machine.go:96] duration metric: took 1.517805295s to provisionDockerMachine
	I1013 21:18:55.129376  232267 client.go:171] duration metric: took 13.05207559s to LocalClient.Create
	I1013 21:18:55.129399  232267 start.go:167] duration metric: took 13.05214319s to libmachine.API.Create "addons-143775"
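A quick way to confirm, outside the automated run, that the container-runtime options written over SSH above actually landed on the node:

    # Show the CRI-O options file provisioned in the step above.
    minikube -p addons-143775 ssh -- cat /etc/sysconfig/crio.minikube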
	I1013 21:18:55.129409  232267 start.go:293] postStartSetup for "addons-143775" (driver="docker")
	I1013 21:18:55.129422  232267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:18:55.129495  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:18:55.129535  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.147012  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.246440  232267 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:18:55.250304  232267 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:18:55.250336  232267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:18:55.250361  232267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 21:18:55.250428  232267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 21:18:55.250463  232267 start.go:296] duration metric: took 121.046279ms for postStartSetup
	I1013 21:18:55.250838  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:55.269349  232267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/config.json ...
	I1013 21:18:55.269600  232267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:18:55.269644  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.287756  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.382649  232267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:18:55.387628  232267 start.go:128] duration metric: took 13.312657051s to createHost
	I1013 21:18:55.387663  232267 start.go:83] releasing machines lock for "addons-143775", held for 13.312808942s
	I1013 21:18:55.387741  232267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-143775
	I1013 21:18:55.405333  232267 ssh_runner.go:195] Run: cat /version.json
	I1013 21:18:55.405390  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.405430  232267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:18:55.405502  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:18:55.423559  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.424434  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:18:55.589342  232267 ssh_runner.go:195] Run: systemctl --version
	I1013 21:18:55.596018  232267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:18:55.631915  232267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:18:55.637016  232267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:18:55.637109  232267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:18:55.664188  232267 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 21:18:55.664211  232267 start.go:495] detecting cgroup driver to use...
	I1013 21:18:55.664242  232267 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 21:18:55.664287  232267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:18:55.681064  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:18:55.693317  232267 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:18:55.693377  232267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:18:55.711205  232267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:18:55.728254  232267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:18:55.809453  232267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:18:55.897555  232267 docker.go:234] disabling docker service ...
	I1013 21:18:55.897624  232267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:18:55.916736  232267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:18:55.929287  232267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:18:56.010190  232267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:18:56.090789  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:18:56.103736  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:18:56.118082  232267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:18:56.118220  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.128671  232267 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 21:18:56.128742  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.137825  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.146681  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.155657  232267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:18:56.163628  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.172429  232267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.185874  232267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:56.194984  232267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:18:56.202230  232267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 21:18:56.202302  232267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 21:18:56.215132  232267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:18:56.223049  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:56.301730  232267 ssh_runner.go:195] Run: sudo systemctl restart crio
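The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from the commands themselves (the resulting file is not captured in this log), the relevant keys after the restart should read roughly as in the comments below, and can be checked from inside the node:

    # Illustrative check, run inside the node (e.g. via `minikube ssh`).
    # Expected values, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf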
	I1013 21:18:56.410733  232267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:18:56.410828  232267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:18:56.414875  232267 start.go:563] Will wait 60s for crictl version
	I1013 21:18:56.414936  232267 ssh_runner.go:195] Run: which crictl
	I1013 21:18:56.418378  232267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:18:56.443823  232267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:18:56.443945  232267 ssh_runner.go:195] Run: crio --version
	I1013 21:18:56.473167  232267 ssh_runner.go:195] Run: crio --version
	I1013 21:18:56.503689  232267 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:18:56.505042  232267 cli_runner.go:164] Run: docker network inspect addons-143775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:18:56.522101  232267 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 21:18:56.526661  232267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:18:56.537178  232267 kubeadm.go:883] updating cluster {Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:18:56.537306  232267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:56.537351  232267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:56.569234  232267 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:18:56.569256  232267 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:18:56.569323  232267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:56.595801  232267 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:18:56.595824  232267 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:18:56.595834  232267 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 21:18:56.595931  232267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-143775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:18:56.596010  232267 ssh_runner.go:195] Run: crio config
	I1013 21:18:56.644147  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:18:56.644171  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:18:56.644192  232267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:18:56.644215  232267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-143775 NodeName:addons-143775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:18:56.644343  232267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-143775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
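Before init runs, a rendered config like the one above can be sanity-checked with kubeadm's built-in validator. An illustrative invocation using the binary path and file path from this run (not something the test executes):

    # Validate the generated kubeadm config on the node (kubeadm v1.26+).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml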
	
	I1013 21:18:56.644406  232267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:18:56.652843  232267 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:18:56.652917  232267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:18:56.660599  232267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 21:18:56.673353  232267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:18:56.689193  232267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
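Once the kubelet unit and the 10-kubeadm.conf drop-in copied above are in place, a hedged way to view the merged unit on the node is:

    # systemctl cat shows kubelet.service plus its drop-ins.
    minikube -p addons-143775 ssh -- systemctl cat kubelet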
	I1013 21:18:56.701807  232267 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:18:56.705528  232267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:18:56.715395  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:56.796490  232267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:18:56.819561  232267 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775 for IP: 192.168.49.2
	I1013 21:18:56.819590  232267 certs.go:195] generating shared ca certs ...
	I1013 21:18:56.819614  232267 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:56.819794  232267 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 21:18:57.001219  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt ...
	I1013 21:18:57.001251  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt: {Name:mk442cacdce4a6ea7cb8d8b5f3e18c2cb5b41a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.001465  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key ...
	I1013 21:18:57.001484  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key: {Name:mk947158710d75502a659246e73cfaf047ddaa6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.001606  232267 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 21:18:57.234866  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt ...
	I1013 21:18:57.234897  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt: {Name:mkd17add24a0f0553350cea006f2a6bd06f30ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.235127  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key ...
	I1013 21:18:57.235145  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key: {Name:mk7e152fded2964a3684c36e3bb4e18c4de83b1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.235938  232267 certs.go:257] generating profile certs ...
	I1013 21:18:57.236030  232267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key
	I1013 21:18:57.236046  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt with IP's: []
	I1013 21:18:57.491047  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt ...
	I1013 21:18:57.491083  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: {Name:mk47d1fb7df9c27d51cd11a02a41a0743b4626a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.492031  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key ...
	I1013 21:18:57.492058  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.key: {Name:mkd68ac8b82a5a1fc9ca1e02e750598a36d378bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.492181  232267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f
	I1013 21:18:57.492209  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 21:18:57.779964  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f ...
	I1013 21:18:57.780021  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f: {Name:mk5474854b39339081ddc249e46c8872150290d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.780235  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f ...
	I1013 21:18:57.780251  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f: {Name:mk2e53c5b1006a3824e0dcf9d2a9e6f2dd7dc117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:57.781247  232267 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt.8af6ed4f -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt
	I1013 21:18:57.781358  232267 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key.8af6ed4f -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key
	I1013 21:18:57.781417  232267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key
	I1013 21:18:57.781438  232267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt with IP's: []
	I1013 21:18:58.073898  232267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt ...
	I1013 21:18:58.073934  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt: {Name:mk29cd6287fc0bc16dd0ea89fa692a61a7cf9e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:58.074809  232267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key ...
	I1013 21:18:58.074834  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key: {Name:mk8a49e4c36d349fc420cf9cb89bbc074ddbfed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:58.075059  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:18:58.075095  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:18:58.075118  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:18:58.075145  232267 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 21:18:58.075871  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:18:58.094651  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:18:58.112187  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:18:58.129591  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:18:58.147216  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 21:18:58.164640  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 21:18:58.182581  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:18:58.200661  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:18:58.219100  232267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:18:58.239390  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:18:58.252743  232267 ssh_runner.go:195] Run: openssl version
	I1013 21:18:58.258948  232267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:18:58.270635  232267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.274847  232267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.274919  232267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:58.309458  232267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
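The link name b5213941.0 above is OpenSSL's subject-hash of the minikube CA, reproducible (illustratively) with the same command the step before it runs:

    # Prints the 8-hex-digit hash used for the /etc/ssl/certs symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem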
	I1013 21:18:58.318459  232267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:18:58.322202  232267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:18:58.322277  232267 kubeadm.go:400] StartCluster: {Name:addons-143775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-143775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:58.322355  232267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:18:58.322400  232267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:18:58.350150  232267 cri.go:89] found id: ""
	I1013 21:18:58.350235  232267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:18:58.358583  232267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:18:58.366632  232267 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:18:58.366700  232267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:18:58.374505  232267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:18:58.374528  232267 kubeadm.go:157] found existing configuration files:
	
	I1013 21:18:58.374581  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:18:58.382287  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:18:58.382354  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:18:58.390247  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:18:58.397657  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:18:58.397720  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:18:58.404977  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:18:58.412539  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:18:58.412612  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:18:58.419857  232267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:18:58.427350  232267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:18:58.427410  232267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
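
The cleanup above follows one pattern per kubeconfig file: grep for the expected control-plane endpoint, and remove the file when the check fails so that kubeadm can regenerate it. A minimal shell sketch of that loop (illustration only, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or absent: kubeadm init recreates it
      fi
    done
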
	I1013 21:18:58.435923  232267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:18:58.475126  232267 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:18:58.475233  232267 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:18:58.497722  232267 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:18:58.497815  232267 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 21:18:58.497909  232267 kubeadm.go:318] OS: Linux
	I1013 21:18:58.498030  232267 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:18:58.498132  232267 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:18:58.498195  232267 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:18:58.498256  232267 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:18:58.498326  232267 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:18:58.498389  232267 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:18:58.498449  232267 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:18:58.498505  232267 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 21:18:58.555080  232267 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:18:58.555232  232267 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:18:58.555335  232267 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:18:58.563650  232267 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:18:58.566746  232267 out.go:252]   - Generating certificates and keys ...
	I1013 21:18:58.566858  232267 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:18:58.566949  232267 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:18:58.883483  232267 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:18:59.078252  232267 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:18:59.368868  232267 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:18:59.557505  232267 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:18:59.848183  232267 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:18:59.848295  232267 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-143775 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 21:18:59.962229  232267 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:18:59.962395  232267 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-143775 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 21:19:00.153490  232267 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:19:00.459371  232267 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:19:00.647354  232267 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:19:00.647420  232267 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:19:00.809640  232267 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:19:01.006072  232267 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:19:01.299669  232267 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:19:01.368197  232267 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:19:01.436233  232267 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:19:01.436977  232267 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:19:01.441684  232267 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:19:01.443941  232267 out.go:252]   - Booting up control plane ...
	I1013 21:19:01.444093  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:19:01.444252  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:19:01.444358  232267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:19:01.461873  232267 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:19:01.461973  232267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:19:01.469052  232267 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:19:01.470086  232267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:19:01.470157  232267 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:19:01.569452  232267 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:19:01.569631  232267 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:19:02.570459  232267 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000955025s
	I1013 21:19:02.574009  232267 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:19:02.574154  232267 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 21:19:02.574266  232267 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:19:02.574405  232267 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:19:04.539550  232267 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.965534565s
	I1013 21:19:05.177437  232267 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.603531217s
	I1013 21:19:06.075120  232267 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501172935s
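
The kubelet check and the three control-plane probes above hit fixed endpoints, so they can be reproduced by hand against the running node (addresses copied from the log; -k because the serving certificates are cluster-internal):

    curl http://127.0.0.1:10248/healthz       # kubelet (plain HTTP)
    curl -k https://192.168.49.2:8443/livez   # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler
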
	I1013 21:19:06.086622  232267 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 21:19:06.097933  232267 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 21:19:06.107625  232267 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 21:19:06.107907  232267 kubeadm.go:318] [mark-control-plane] Marking the node addons-143775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 21:19:06.117237  232267 kubeadm.go:318] [bootstrap-token] Using token: mrcpwn.vva6go2h8n9djyuw
	I1013 21:19:06.118794  232267 out.go:252]   - Configuring RBAC rules ...
	I1013 21:19:06.118917  232267 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 21:19:06.122141  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 21:19:06.127672  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 21:19:06.131315  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 21:19:06.133917  232267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 21:19:06.136680  232267 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 21:19:06.480666  232267 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 21:19:06.901241  232267 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 21:19:07.482451  232267 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 21:19:07.483247  232267 kubeadm.go:318] 
	I1013 21:19:07.483312  232267 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 21:19:07.483320  232267 kubeadm.go:318] 
	I1013 21:19:07.483388  232267 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 21:19:07.483394  232267 kubeadm.go:318] 
	I1013 21:19:07.483419  232267 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 21:19:07.483476  232267 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 21:19:07.483538  232267 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 21:19:07.483551  232267 kubeadm.go:318] 
	I1013 21:19:07.483612  232267 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 21:19:07.483621  232267 kubeadm.go:318] 
	I1013 21:19:07.483690  232267 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 21:19:07.483699  232267 kubeadm.go:318] 
	I1013 21:19:07.483765  232267 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 21:19:07.483882  232267 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 21:19:07.483943  232267 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 21:19:07.483950  232267 kubeadm.go:318] 
	I1013 21:19:07.484104  232267 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 21:19:07.484228  232267 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 21:19:07.484270  232267 kubeadm.go:318] 
	I1013 21:19:07.484406  232267 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mrcpwn.vva6go2h8n9djyuw \
	I1013 21:19:07.484545  232267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 21:19:07.484586  232267 kubeadm.go:318] 	--control-plane 
	I1013 21:19:07.484596  232267 kubeadm.go:318] 
	I1013 21:19:07.484708  232267 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 21:19:07.484717  232267 kubeadm.go:318] 
	I1013 21:19:07.484818  232267 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mrcpwn.vva6go2h8n9djyuw \
	I1013 21:19:07.484955  232267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 21:19:07.487290  232267 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 21:19:07.487441  232267 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
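
Both warnings are expected in this environment: the GCP kernel image ships no "configs" module, so kubeadm cannot introspect the kernel config, and minikube manages the kubelet itself rather than enabling it through systemd. One way to confirm the cause of the first warning on the host (a sketch):

    sudo modprobe configs || true                # the FATAL here matches the log
    ls /boot/config-$(uname -r) /proc/config.gz  # alternate kernel-config sources kubeadm can read
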
	I1013 21:19:07.487494  232267 cni.go:84] Creating CNI manager for ""
	I1013 21:19:07.487516  232267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:19:07.489286  232267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 21:19:07.490633  232267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 21:19:07.495132  232267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 21:19:07.495151  232267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 21:19:07.508587  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 21:19:07.726563  232267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 21:19:07.726668  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:07.726701  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-143775 minikube.k8s.io/updated_at=2025_10_13T21_19_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=addons-143775 minikube.k8s.io/primary=true
	I1013 21:19:07.738222  232267 ops.go:34] apiserver oom_adj: -16
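
The oom_adj read above confirms the API server runs with a strongly negative OOM score, so the kernel's OOM killer avoids it under memory pressure. The same value can be read through the legacy knob the log shows or its modern replacement (assuming a single kube-apiserver process):

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface; -16 here
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current interface (-1000..1000)
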
	I1013 21:19:07.818042  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:08.318779  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:08.818314  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:09.318874  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:09.818354  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:10.319054  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:10.818141  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:11.319068  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:11.819093  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:12.318176  232267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:19:12.386449  232267 kubeadm.go:1113] duration metric: took 4.659843165s to wait for elevateKubeSystemPrivileges
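
The repeated "get sa default" calls above are a readiness poll: the default ServiceAccount only appears once the controller manager's service-account controllers are up, at which point the minikube-rbac cluster-admin binding created at 21:19:07 can take effect. A rough equivalent of the poll:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly 500ms between attempts
    done
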
	I1013 21:19:12.386497  232267 kubeadm.go:402] duration metric: took 14.064226802s to StartCluster
	I1013 21:19:12.386526  232267 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:19:12.386702  232267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:19:12.387314  232267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:19:12.387506  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 21:19:12.387537  232267 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:19:12.387593  232267 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 21:19:12.387766  232267 addons.go:69] Setting yakd=true in profile "addons-143775"
	I1013 21:19:12.387776  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:12.387781  232267 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-143775"
	I1013 21:19:12.387794  232267 addons.go:238] Setting addon yakd=true in "addons-143775"
	I1013 21:19:12.387814  232267 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-143775"
	I1013 21:19:12.387829  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387865  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387876  232267 addons.go:69] Setting storage-provisioner=true in profile "addons-143775"
	I1013 21:19:12.387889  232267 addons.go:238] Setting addon storage-provisioner=true in "addons-143775"
	I1013 21:19:12.387876  232267 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-143775"
	I1013 21:19:12.387915  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387900  232267 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-143775"
	I1013 21:19:12.388070  232267 addons.go:69] Setting metrics-server=true in profile "addons-143775"
	I1013 21:19:12.387971  232267 addons.go:69] Setting default-storageclass=true in profile "addons-143775"
	I1013 21:19:12.388094  232267 addons.go:69] Setting volcano=true in profile "addons-143775"
	I1013 21:19:12.388105  232267 addons.go:69] Setting registry-creds=true in profile "addons-143775"
	I1013 21:19:12.388111  232267 addons.go:238] Setting addon volcano=true in "addons-143775"
	I1013 21:19:12.388113  232267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-143775"
	I1013 21:19:12.388116  232267 addons.go:69] Setting volumesnapshots=true in profile "addons-143775"
	I1013 21:19:12.388122  232267 addons.go:238] Setting addon registry-creds=true in "addons-143775"
	I1013 21:19:12.388130  232267 addons.go:238] Setting addon volumesnapshots=true in "addons-143775"
	I1013 21:19:12.388165  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388189  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388191  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388207  232267 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-143775"
	I1013 21:19:12.388222  232267 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-143775"
	I1013 21:19:12.388509  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388509  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388571  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388096  232267 addons.go:238] Setting addon metrics-server=true in "addons-143775"
	I1013 21:19:12.388615  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388671  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388675  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388750  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388079  232267 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-143775"
	I1013 21:19:12.388984  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388018  232267 addons.go:69] Setting inspektor-gadget=true in profile "addons-143775"
	I1013 21:19:12.389169  232267 addons.go:238] Setting addon inspektor-gadget=true in "addons-143775"
	I1013 21:19:12.389192  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.389576  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.389620  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388034  232267 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-143775"
	I1013 21:19:12.390115  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388011  232267 addons.go:69] Setting ingress-dns=true in profile "addons-143775"
	I1013 21:19:12.390522  232267 addons.go:238] Setting addon ingress-dns=true in "addons-143775"
	I1013 21:19:12.390539  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.390568  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.388008  232267 addons.go:69] Setting cloud-spanner=true in profile "addons-143775"
	I1013 21:19:12.390613  232267 addons.go:238] Setting addon cloud-spanner=true in "addons-143775"
	I1013 21:19:12.390627  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.390650  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.387980  232267 addons.go:69] Setting gcp-auth=true in profile "addons-143775"
	I1013 21:19:12.390946  232267 mustload.go:65] Loading cluster: addons-143775
	I1013 21:19:12.388510  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388077  232267 addons.go:69] Setting registry=true in profile "addons-143775"
	I1013 21:19:12.391381  232267 addons.go:238] Setting addon registry=true in "addons-143775"
	I1013 21:19:12.391420  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.389072  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.388003  232267 addons.go:69] Setting ingress=true in profile "addons-143775"
	I1013 21:19:12.391698  232267 addons.go:238] Setting addon ingress=true in "addons-143775"
	I1013 21:19:12.391740  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.391821  232267 out.go:179] * Verifying Kubernetes components...
	I1013 21:19:12.394548  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.394917  232267 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:12.394962  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.395668  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.395739  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.396161  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.396459  232267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:19:12.438336  232267 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 21:19:12.440526  232267 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:19:12.440655  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 21:19:12.440857  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.444691  232267 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 21:19:12.446077  232267 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:19:12.446103  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 21:19:12.446177  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.465046  232267 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 21:19:12.465185  232267 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 21:19:12.467483  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 21:19:12.467509  232267 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 21:19:12.467586  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.469426  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 21:19:12.469446  232267 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 21:19:12.469506  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.473485  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 21:19:12.474383  232267 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 21:19:12.474712  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 21:19:12.474735  232267 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 21:19:12.474802  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.477743  232267 addons.go:238] Setting addon default-storageclass=true in "addons-143775"
	I1013 21:19:12.477834  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.478032  232267 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-143775"
	I1013 21:19:12.478424  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:12.478357  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.479181  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:12.480489  232267 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 21:19:12.481676  232267 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 21:19:12.481700  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 21:19:12.481757  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.485089  232267 host.go:66] Checking if "addons-143775" exists ...
	W1013 21:19:12.493225  232267 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 21:19:12.505444  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1013 21:19:12.507062  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:12.508823  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:12.510687  232267 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 21:19:12.510903  232267 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:19:12.511192  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 21:19:12.511308  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.512745  232267 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:19:12.512810  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 21:19:12.512893  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.523603  232267 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 21:19:12.525086  232267 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 21:19:12.525118  232267 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 21:19:12.525191  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.526094  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.526624  232267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
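
The long sed pipeline above edits the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive, mapping host.minikube.internal to the Docker network gateway, and enables query logging before "errors". With a working kubeconfig, the injected fragment can be checked like this (expected output reconstructed from the sed expression):

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }
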
	I1013 21:19:12.530839  232267 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 21:19:12.532353  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 21:19:12.532394  232267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 21:19:12.534156  232267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:19:12.534178  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 21:19:12.534234  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.534796  232267 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 21:19:12.536094  232267 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:19:12.536112  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 21:19:12.536117  232267 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 21:19:12.536131  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 21:19:12.536188  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.536436  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.536944  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.537890  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.542663  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 21:19:12.544906  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 21:19:12.548057  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 21:19:12.550519  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.555953  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.556679  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 21:19:12.558159  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 21:19:12.559291  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.564394  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 21:19:12.565795  232267 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 21:19:12.567037  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 21:19:12.567062  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 21:19:12.567132  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.570912  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.576046  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.585343  232267 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 21:19:12.586796  232267 out.go:179]   - Using image docker.io/busybox:stable
	I1013 21:19:12.587584  232267 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 21:19:12.587607  232267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 21:19:12.587672  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.590079  232267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:19:12.590103  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 21:19:12.590163  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:12.597460  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.609439  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.622142  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.629302  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.633357  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.637087  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.644340  232267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:19:12.647519  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:12.728806  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:19:12.736704  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 21:19:12.736739  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 21:19:12.740874  232267 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 21:19:12.740898  232267 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 21:19:12.760547  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 21:19:12.760580  232267 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 21:19:12.761914  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 21:19:12.761963  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 21:19:12.764879  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:19:12.773374  232267 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:19:12.773402  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 21:19:12.777976  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:19:12.804773  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 21:19:12.804807  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 21:19:12.823742  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 21:19:12.823777  232267 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 21:19:12.825195  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 21:19:12.827343  232267 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 21:19:12.827364  232267 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 21:19:12.830381  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:19:12.835258  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:19:12.837959  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:19:12.841572  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 21:19:12.841649  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 21:19:12.850062  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:19:12.857826  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:19:12.862808  232267 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:12.862908  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 21:19:12.876079  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 21:19:12.876209  232267 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 21:19:12.881458  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 21:19:12.883267  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 21:19:12.883358  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 21:19:12.887626  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 21:19:12.887713  232267 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 21:19:12.893675  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 21:19:12.893700  232267 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 21:19:12.931391  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:12.932311  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 21:19:12.932390  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 21:19:12.937787  232267 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:19:12.937815  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 21:19:12.946827  232267 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:19:12.946913  232267 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 21:19:12.952719  232267 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:19:12.952745  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 21:19:12.956135  232267 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1013 21:19:12.957389  232267 node_ready.go:35] waiting up to 6m0s for node "addons-143775" to be "Ready" ...
	I1013 21:19:12.975459  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 21:19:12.975491  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 21:19:13.022710  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:19:13.024236  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:19:13.047600  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:19:13.067978  232267 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 21:19:13.068023  232267 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 21:19:13.181486  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 21:19:13.181513  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 21:19:13.255511  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 21:19:13.255612  232267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 21:19:13.327950  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 21:19:13.327980  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 21:19:13.378650  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 21:19:13.378743  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 21:19:13.442126  232267 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:19:13.442344  232267 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 21:19:13.470751  232267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-143775" context rescaled to 1 replicas
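
The rescale above trims CoreDNS from kubeadm's default of two replicas to one, which is enough for a single-node cluster; the equivalent manual command would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
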
	I1013 21:19:13.495680  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:19:14.064133  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.228836662s)
	I1013 21:19:14.064180  232267 addons.go:479] Verifying addon ingress=true in "addons-143775"
	I1013 21:19:14.064196  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.226106902s)
	I1013 21:19:14.064292  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.182810084s)
	I1013 21:19:14.064427  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133008106s)
	I1013 21:19:14.064252  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.214107775s)
	W1013 21:19:14.064467  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:14.064491  232267 retry.go:31] will retry after 247.153057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
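
The validation error is consistent with the transfer earlier in the log: ig-crd.yaml was copied at only 14 bytes, so it carries no apiVersion or kind for kubectl to validate, while the companion ig-deployment.yaml applied cleanly. A quick way to confirm that on the node (a triage sketch):

    wc -c /etc/kubernetes/addons/ig-crd.yaml   # 14 bytes: effectively empty
    cat /etc/kubernetes/addons/ig-crd.yaml     # no apiVersion/kind fields present
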
	I1013 21:19:14.064275  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.206420858s)
	I1013 21:19:14.064551  232267 addons.go:479] Verifying addon registry=true in "addons-143775"
	I1013 21:19:14.064656  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040377637s)
	I1013 21:19:14.064674  232267 addons.go:479] Verifying addon metrics-server=true in "addons-143775"
	I1013 21:19:14.064709  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.041763375s)
	I1013 21:19:14.066335  232267 out.go:179] * Verifying ingress addon...
	I1013 21:19:14.067109  232267 out.go:179] * Verifying registry addon...
	I1013 21:19:14.067110  232267 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-143775 service yakd-dashboard -n yakd-dashboard
	
	I1013 21:19:14.068917  232267 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 21:19:14.069525  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 21:19:14.073922  232267 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:19:14.073946  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:14.074441  232267 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 21:19:14.074461  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.312319  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:14.544334  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.49667013s)
	W1013 21:19:14.544394  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 21:19:14.544424  232267 retry.go:31] will retry after 268.468741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
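
The "no matches for kind \"VolumeSnapshotClass\"" failure above is an ordering race: the CRDs and a custom resource that depends on them are applied in a single kubectl invocation, and the API server has not registered the new kind yet. minikube simply retries the combined apply, as the following lines show; an alternative is to apply the CRDs first and block on their Established condition. A sketch of that approach, assuming kubectl is on PATH (file paths mirror the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		// CRDs first.
		if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
			panic(err)
		}
		// Block until the API server accepts the new kind.
		if err := run("wait", "--for=condition=Established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
			panic(err)
		}
		// Now the dependent VolumeSnapshotClass can resolve its kind.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}
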
	I1013 21:19:14.544599  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.048874196s)
	I1013 21:19:14.544641  232267 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-143775"
	I1013 21:19:14.546469  232267 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 21:19:14.548967  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 21:19:14.552244  232267 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:19:14.552267  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:14.571552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:14.571638  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.813701  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1013 21:19:14.948797  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:14.948835  232267 retry.go:31] will retry after 494.855457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
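
Unlike the snapshot race, this failure never resolves: "[apiVersion not set, kind not set]" is kubectl's client-side validation rejecting the ig-crd.yaml file itself, so every retry below fails identically — the manifest on disk is empty or malformed, not the cluster. A hypothetical pre-flight lint that would catch this before any apply, assuming gopkg.in/yaml.v3:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	// lintManifest reports YAML documents missing apiVersion or kind — the
	// same fields kubectl's validation complains about above.
	func lintManifest(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var tm typeMeta
			if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
				return nil
			} else if err != nil {
				return fmt.Errorf("%s doc %d: %w", path, i, err)
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				return fmt.Errorf("%s doc %d: apiVersion/kind not set", path, i)
			}
		}
	}

	func main() {
		if err := lintManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Println(err) // would flag the bad manifest before any retry loop
		}
	}
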
	W1013 21:19:14.960754  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:15.053208  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.072813  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.072869  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:15.444876  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:15.553533  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.572938  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.573066  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:16.052477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.072500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.072553  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:16.552700  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.572528  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.572706  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:16.960946  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
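
The node_ready.go warnings interleaved here come from a separate poll on the node's Ready condition. A sketch of that check with client-go — a hypothetical helper, assuming a configured Clientset:

	package nodewait

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the node's Ready condition is True — the
	// check behind the `"Ready":"False" status (will retry)` warnings.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
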
	I1013 21:19:17.053198  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.072155  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:17.072764  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:17.323474  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.509718378s)
	I1013 21:19:17.323568  232267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.878642242s)
	W1013 21:19:17.323616  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.323643  232267 retry.go:31] will retry after 760.118509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.552957  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.572842  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:17.573040  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.052713  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.072459  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:18.072666  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.084619  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:18.552740  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.571796  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.572482  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 21:19:18.629718  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:18.629754  232267 retry.go:31] will retry after 711.667599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:19:18.961106  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:19.052619  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.072396  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.072537  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:19.341866  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:19.553211  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.572543  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.572597  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:19.892593  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:19.892624  232267 retry.go:31] will retry after 1.664296033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:20.052726  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.072308  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.072454  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.092278  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 21:19:20.092347  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:20.109838  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:19:20.214851  232267 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 21:19:20.227959  232267 addons.go:238] Setting addon gcp-auth=true in "addons-143775"
	I1013 21:19:20.228037  232267 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:19:20.228733  232267 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:19:20.246542  232267 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 21:19:20.246636  232267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:19:20.263783  232267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
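
To push the gcp-auth credentials, minikube resolves the container's forwarded SSH port with docker container inspect and then copies files over SSH, as the two inspect/sshutil pairs above show. A sketch of the port lookup using the same Go template that appears in the log, assuming the docker CLI is installed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker mapped to the container's
	// 22/tcp, using the template from the log.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-143775")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh via 127.0.0.1:" + port) // e.g. 32768 in the log above
	}
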
	I1013 21:19:20.360131  232267 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:19:20.361585  232267 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 21:19:20.362733  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 21:19:20.362751  232267 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 21:19:20.376699  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 21:19:20.376727  232267 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 21:19:20.390084  232267 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:19:20.390111  232267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 21:19:20.403258  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:19:20.552762  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.572564  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.572830  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.719141  232267 addons.go:479] Verifying addon gcp-auth=true in "addons-143775"
	I1013 21:19:20.720560  232267 out.go:179] * Verifying gcp-auth addon...
	I1013 21:19:20.722367  232267 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 21:19:20.724895  232267 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 21:19:20.724911  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:21.052694  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.072648  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:21.072879  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.225583  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:21.460567  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:21.552680  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.557826  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:21.572455  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:21.572596  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.725209  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.052384  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.072217  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.072340  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:19:22.117104  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.117151  232267 retry.go:31] will retry after 1.694109804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.226080  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.552821  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.572561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.572827  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:22.725585  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.052733  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.072460  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.073034  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.225926  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:23.461022  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:23.553078  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.572162  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.572552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.725447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.811527  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:24.052884  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.073030  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:24.073279  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.226466  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:24.360624  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:24.360653  232267 retry.go:31] will retry after 3.369253123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:24.552292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.572299  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:24.572409  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.726139  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.052301  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.072145  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.072346  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:25.226360  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.552389  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.572296  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.572430  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:25.726234  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:25.960962  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:26.053371  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.072103  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:26.072325  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.226147  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:26.552181  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.572323  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.572561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:26.725311  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.052611  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.072288  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.072312  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.226263  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.551921  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.572916  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.572955  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.726099  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.731109  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:28.052292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.072317  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.072539  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.226212  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:28.275259  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:28.275304  232267 retry.go:31] will retry after 4.658291441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
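
Note how the retry delays escalate across these failures (268ms, 494ms, 760ms, ~1.7s, 3.4s, 4.7s, ...): retry.go is applying roughly doubling, jittered backoff. A minimal sketch of that pattern — not minikube's actual retry code — which, for a malformed manifest like ig-crd.yaml, only spaces out identical failures:

	package retryutil

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op with a roughly doubling, jittered delay,
	// logging each failure in the style of the "will retry after" lines.
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Up to 50% jitter keeps concurrent retries from synchronizing.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}
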
	W1013 21:19:28.461815  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:28.552606  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.572362  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.572494  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.725301  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.052192  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.071970  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.072503  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.225687  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.552636  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.572326  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.572530  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.725560  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.052769  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.072483  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.072685  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.225266  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.552361  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.572116  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.572265  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.726344  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:30.961058  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:31.052978  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.072568  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.072822  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.225681  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:31.552447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.572394  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.572452  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.725430  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.052772  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.072397  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.072566  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.225454  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.552475  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.572467  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.572596  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.725541  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.933757  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:19:32.961140  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:33.052115  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.072066  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.072549  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.225591  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:33.482681  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:33.482713  232267 retry.go:31] will retry after 9.570443732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:33.552672  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.572306  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.572484  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.725239  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.053148  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.072195  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:34.072617  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.225531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.552019  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.572742  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.572794  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:34.725548  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:35.052394  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.071815  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:35.072045  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.225848  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:35.460645  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:35.552890  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.573844  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:35.574091  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.725635  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.053173  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.071962  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.072350  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.225287  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.552506  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.572299  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.572435  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.726366  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:37.052274  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.071877  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.072028  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.225791  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:37.460971  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:37.553316  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.572217  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.572419  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.726321  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.052575  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.072368  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.072534  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.225387  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.552242  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.572279  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.572330  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.726203  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.052781  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.072442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.072567  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.225531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.552179  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.572224  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.572556  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.725553  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:39.960314  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:40.052148  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.072170  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.072657  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.225378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:40.552715  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.572469  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.572739  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.725333  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:41.051820  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.072752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.072839  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:41.225825  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:41.552821  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.575029  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.575119  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:41.726216  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:41.961389  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:42.052399  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.072036  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.072149  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.226147  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:42.551803  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.572667  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.572840  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.725665  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.052363  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.053420  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:43.072832  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:43.072863  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.225671  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.552085  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.572624  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.572820  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 21:19:43.602491  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:43.602526  232267 retry.go:31] will retry after 6.263252627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
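Note on the recurring apply failure above: kubectl validates each manifest document client-side before sending it, so a document in ig-crd.yaml that omits apiVersion and kind is rejected with the error seen here, while the resources from ig-deployment.yaml continue to apply as "unchanged"/"configured". A minimal sketch that reproduces the same message (hypothetical /tmp paths, not the actual addon manifests):

	# A document without apiVersion/kind fails client-side validation:
	cat <<'EOF' >/tmp/bad.yaml
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/bad.yaml
	# error: error validating "/tmp/bad.yaml": error validating data: [apiVersion not set, kind not set]; ...

	# The same document with the header set applies cleanly:
	cat <<'EOF' >/tmp/good.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/good.yaml

The suggested --validate=false would only move the failure, since the apiserver itself rejects objects without kind/apiVersion; the retries below cannot succeed until the manifest is fixed.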
	I1013 21:19:43.725477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:44.052672  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.072436  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.072613  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:44.225500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:44.460302  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:44.552468  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.572450  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.572617  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:44.725584  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.052039  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.072900  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.072966  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.225818  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.552914  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.572752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.572939  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.726410  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:46.052724  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.072774  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.072839  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.225582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:46.460524  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:46.552542  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.572519  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.572699  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.725527  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.052582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.072101  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.072541  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:47.225624  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.552788  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.572681  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:47.572859  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.726252  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.052774  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.072460  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.072565  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:48.225223  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.552218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.572454  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:48.572950  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.725873  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:48.961129  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:49.052940  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.072704  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:49.072912  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.225716  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.552542  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.572632  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:49.572720  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.725548  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.866790  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:50.052859  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.073092  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.073261  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:50.225986  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:50.417637  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:50.417676  232267 retry.go:31] will retry after 15.780847337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:50.552751  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.572860  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.573111  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:50.726089  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:51.052477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.072242  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.072488  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.225378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:51.460354  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:51.552498  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.572431  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.572589  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.725531  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.052406  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.072115  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:52.072307  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.226074  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.552206  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.572387  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.572731  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:52.725952  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:53.052427  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.072373  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:53.072563  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:53.225269  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:19:53.461219  232267 node_ready.go:57] node "addons-143775" has "Ready":"False" status (will retry)
	I1013 21:19:53.552366  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.572250  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:53.572265  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:53.726099  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.054112  232267 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:19:54.054139  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.075681  232267 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:19:54.075708  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:54.075745  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.225633  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.461676  232267 node_ready.go:49] node "addons-143775" is "Ready"
	I1013 21:19:54.461718  232267 node_ready.go:38] duration metric: took 41.504300426s for node "addons-143775" to be "Ready" ...
	I1013 21:19:54.461738  232267 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:19:54.461794  232267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:19:54.488405  232267 api_server.go:72] duration metric: took 42.100831976s to wait for apiserver process to appear ...
	I1013 21:19:54.488436  232267 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:19:54.488462  232267 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 21:19:54.493727  232267 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1013 21:19:54.494880  232267 api_server.go:141] control plane version: v1.34.1
	I1013 21:19:54.494914  232267 api_server.go:131] duration metric: took 6.469739ms to wait for apiserver health ...
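The healthz probe logged here has a direct shell equivalent. In a default minikube/kubeadm setup, /healthz (like /livez and /readyz) is readable without credentials because the system:public-info-viewer role grants it to unauthenticated users, so a plain curl against the same endpoint suffices (-k skips verification of the cluster-internal certificate):

	# Expect HTTP 200 and the body "ok" once the apiserver is healthy
	curl -k https://192.168.49.2:8443/healthz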
	I1013 21:19:54.494927  232267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:19:54.499750  232267 system_pods.go:59] 20 kube-system pods found
	I1013 21:19:54.499800  232267 system_pods.go:61] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.499812  232267 system_pods.go:61] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.499831  232267 system_pods.go:61] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.499840  232267 system_pods.go:61] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.499881  232267 system_pods.go:61] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.499896  232267 system_pods.go:61] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.499902  232267 system_pods.go:61] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.499908  232267 system_pods.go:61] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.499913  232267 system_pods.go:61] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.499922  232267 system_pods.go:61] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.499928  232267 system_pods.go:61] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.499935  232267 system_pods.go:61] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.499943  232267 system_pods.go:61] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.499953  232267 system_pods.go:61] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.499962  232267 system_pods.go:61] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.499976  232267 system_pods.go:61] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.499985  232267 system_pods.go:61] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.500020  232267 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.500030  232267 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.500038  232267 system_pods.go:61] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.500054  232267 system_pods.go:74] duration metric: took 5.115632ms to wait for pod list to return data ...
	I1013 21:19:54.500077  232267 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:19:54.502920  232267 default_sa.go:45] found service account: "default"
	I1013 21:19:54.502950  232267 default_sa.go:55] duration metric: took 2.861101ms for default service account to be created ...
	I1013 21:19:54.502966  232267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:19:54.599561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.599595  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:54.599708  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.600730  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:54.600773  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.600785  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.600794  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.600800  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.600806  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.600810  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.600815  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.600822  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.600826  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.600831  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.600834  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.600838  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.600844  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.600853  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.600859  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.600866  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.600872  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.600878  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.600885  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.600892  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.600909  232267 retry.go:31] will retry after 216.411369ms: missing components: kube-dns
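The "missing components: kube-dns" retries resolve once CoreDNS is Running; CoreDNS fills the kube-dns role and its pods carry the standard k8s-app=kube-dns label, so the same state the retry loop is waiting on can be checked by hand:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s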
	I1013 21:19:54.726004  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.823174  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:54.823215  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:54.823226  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:19:54.823237  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:54.823246  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:54.823255  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:54.823261  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:54.823267  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:54.823273  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:54.823279  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:54.823287  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:54.823293  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:54.823299  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:54.823306  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:54.823315  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:54.823323  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:54.823330  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:54.823338  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:54.823352  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.823365  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:54.823376  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:19:54.823399  232267 retry.go:31] will retry after 251.942092ms: missing components: kube-dns
	I1013 21:19:55.053202  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.072261  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.072818  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:55.079834  232267 system_pods.go:86] 20 kube-system pods found
	I1013 21:19:55.079873  232267 system_pods.go:89] "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:19:55.079882  232267 system_pods.go:89] "coredns-66bc5c9577-hrwcq" [25a3dd55-7f83-415b-883a-46d48cf47a9c] Running
	I1013 21:19:55.079895  232267 system_pods.go:89] "csi-hostpath-attacher-0" [2c4ef937-534b-4fd4-951d-2703e4e2786e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:19:55.079906  232267 system_pods.go:89] "csi-hostpath-resizer-0" [0d58f1c4-8cf5-44e2-9ebb-84453ddf9e1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:19:55.079916  232267 system_pods.go:89] "csi-hostpathplugin-74gj5" [b0f7623d-c8bb-49e5-bbee-49d50b562724] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:19:55.079924  232267 system_pods.go:89] "etcd-addons-143775" [a29bb28e-fc01-422c-88d4-8a069ab9d9be] Running
	I1013 21:19:55.079931  232267 system_pods.go:89] "kindnet-gxtvs" [0b8a4ec7-d20b-49ab-b757-1c532a3b04b6] Running
	I1013 21:19:55.079939  232267 system_pods.go:89] "kube-apiserver-addons-143775" [7701b603-3704-401f-98ec-746b84d0cbbf] Running
	I1013 21:19:55.079946  232267 system_pods.go:89] "kube-controller-manager-addons-143775" [78f6b439-7ab5-4af7-8223-92ea1d5429ea] Running
	I1013 21:19:55.079958  232267 system_pods.go:89] "kube-ingress-dns-minikube" [9cf35d01-b1fa-44a9-9bc8-5ad60442d705] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:19:55.079968  232267 system_pods.go:89] "kube-proxy-m55cq" [208146d5-8de3-4b99-89b8-5976fed1698a] Running
	I1013 21:19:55.079975  232267 system_pods.go:89] "kube-scheduler-addons-143775" [43c7c683-19ef-4140-80b7-7178150968ba] Running
	I1013 21:19:55.079986  232267 system_pods.go:89] "metrics-server-85b7d694d7-vdzpz" [cbad5626-3368-443c-8b1f-db21133a333c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:19:55.080015  232267 system_pods.go:89] "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:19:55.080028  232267 system_pods.go:89] "registry-6b586f9694-h4pdt" [db159b01-5db6-4300-85e5-55d60d08480c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:19:55.080036  232267 system_pods.go:89] "registry-creds-764b6fb674-skkk5" [a746932d-4fa8-46a2-96bc-caf52484966b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:19:55.080046  232267 system_pods.go:89] "registry-proxy-rrhdd" [0cf00a49-8dae-4bc0-9c48-21b177af9830] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:19:55.080057  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kkj6s" [9173a351-657d-4cb7-877d-b296af6af1b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:55.080065  232267 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zv74f" [b42d7359-8e90-4235-93a0-3b7f08e15fb7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:19:55.080080  232267 system_pods.go:89] "storage-provisioner" [c8665e3d-cb2f-41f7-8478-0156acdcc178] Running
	I1013 21:19:55.080096  232267 system_pods.go:126] duration metric: took 577.123161ms to wait for k8s-apps to be running ...
	I1013 21:19:55.080108  232267 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:19:55.080165  232267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:19:55.099386  232267 system_svc.go:56] duration metric: took 19.266854ms WaitForService to wait for kubelet
	I1013 21:19:55.099423  232267 kubeadm.go:586] duration metric: took 42.711856848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
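The wait map above aggregates the individual probes from this run; each has a one-line manual equivalent from the host (minikube ssh runs a command inside the node container, and the profile name here is taken from this report):

	minikube -p addons-143775 ssh "sudo systemctl is-active kubelet"              # expect "active"
	minikube -p addons-143775 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"  # apiserver pid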
	I1013 21:19:55.099453  232267 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:19:55.103285  232267 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 21:19:55.103321  232267 node_conditions.go:123] node cpu capacity is 8
	I1013 21:19:55.103356  232267 node_conditions.go:105] duration metric: took 3.896171ms to run NodePressure ...
	I1013 21:19:55.103372  232267 start.go:241] waiting for startup goroutines ...
	I1013 21:19:55.226271  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:55.553436  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:55.572758  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.726025  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.053078  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.073254  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:56.073301  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.226321  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.552981  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.573300  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.573306  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:56.726183  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.053036  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.072597  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:57.072788  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.225502  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.553035  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.573096  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:57.573223  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.726378  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.054845  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.073421  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:58.073451  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.226834  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.553689  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.572945  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.573092  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:58.726341  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.053610  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.072685  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:59.072701  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.226136  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.552434  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:59.572520  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.725292  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.052960  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.073055  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:00.073176  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.226118  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.552477  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.573665  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:00.573842  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.725836  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.052778  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.073158  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:01.073200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.226592  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.553493  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.572593  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:01.572702  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.725808  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.052437  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.072522  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.152752  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:02.225725  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.552607  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.573796  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:02.573827  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.726494  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.053712  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.073103  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:03.073200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.226699  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.553873  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.573047  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:03.573096  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.726111  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.052414  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.072442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:04.072532  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.226020  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.551917  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.573359  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:04.573489  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.819561  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.053259  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.072483  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.072724  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:05.225403  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.552802  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.572922  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.572922  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:05.726607  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.053144  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.072775  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:06.072868  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.198888  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:20:06.227127  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.552732  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.573152  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:06.573166  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.726538  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:20:06.842531  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:06.842573  232267 retry.go:31] will retry after 30.83180354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:07.053222  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.072223  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.072383  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:07.224959  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:07.552523  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.572335  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:07.572403  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.726272  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.073084  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:08.073096  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.073214  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.226351  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.552836  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.573100  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:08.573100  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.727102  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.052838  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.072910  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:09.073023  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.226464  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.553457  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.572447  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:09.572544  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.725705  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.098200  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.098307  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:10.098385  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.226218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.553027  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.573316  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:10.573737  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.726454  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:11.053268  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.072447  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:11.072552  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:11.225751  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:11.552955  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.573121  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:11.573227  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:11.726432  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.052960  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.072636  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:12.072700  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:12.226285  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.553530  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.572760  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:12.572967  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:12.726613  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.053145  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.154361  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:13.154416  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:13.254763  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.553264  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.572363  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:13.572668  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:13.725930  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.053087  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.073203  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:14.073498  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:14.226035  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.552956  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.573027  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:14.573128  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:14.726098  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:15.052219  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:15.072706  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:15.072731  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:15.225808  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:15.553451  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:15.572633  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:15.572714  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:15.725680  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:16.110172  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.110934  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:16.111247  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:16.334850  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:16.553905  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.573829  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:16.574135  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:16.727444  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:17.056809  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.077328  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:17.077931  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:17.227729  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:17.552965  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.574429  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:17.574442  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:17.726094  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:18.052719  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.073307  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:18.073751  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:18.227388  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:18.553535  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.572635  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:18.573011  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:18.726985  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:19.053230  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.073500  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:19.073554  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:19.225478  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:19.552878  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.573453  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:19.573497  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:19.725504  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:20.053544  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:20.072492  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:20.072690  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:20.225979  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:20.552640  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:20.572912  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:20.573072  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:20.726068  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:21.052964  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:21.073191  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:21.073350  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:21.226717  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:21.552614  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:21.573208  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:21.573400  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:21.726432  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:22.053674  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:22.074070  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:22.075892  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:22.225823  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:22.552416  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:22.572488  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:22.572488  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:22.725630  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:23.052917  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:23.072591  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:23.072661  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:23.225904  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:23.553408  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:23.573971  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:23.574033  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:23.726084  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:24.052804  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:24.072450  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:24.072791  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:24.225714  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:24.554134  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:24.575701  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:20:24.575919  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:24.727061  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:25.058795  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:25.267678  232267 kapi.go:107] duration metric: took 1m11.198148764s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 21:20:25.267810  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:25.268452  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:25.552838  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:25.572718  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:25.725664  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:26.053578  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:26.072904  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:26.226170  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:26.553708  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:26.572832  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:26.726087  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:27.052517  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:27.072556  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:27.226749  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:27.553714  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:27.572955  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:27.725582  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:28.053850  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:28.073623  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:28.226065  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:28.553218  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:28.573947  232267 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:28.726274  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:29.053293  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:29.153666  232267 kapi.go:107] duration metric: took 1m15.084743515s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 21:20:29.225557  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:29.552625  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:29.725863  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:30.053011  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:30.227711  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:30.553588  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:30.725907  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:31.087781  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:31.225439  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:31.553117  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:31.726603  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:32.052687  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:32.226603  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:32.553080  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:32.726220  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:33.052887  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:33.226937  232267 kapi.go:107] duration metric: took 1m12.504562068s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 21:20:33.229120  232267 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-143775 cluster.
	I1013 21:20:33.231018  232267 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 21:20:33.232908  232267 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 21:20:33.552716  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:34.052908  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:34.552583  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:35.053785  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:35.553715  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:36.053298  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:36.552534  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:37.053017  232267 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:37.552545  232267 kapi.go:107] duration metric: took 1m23.003577776s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 21:20:37.674622  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:20:38.217361  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:38.217397  232267 retry.go:31] will retry after 21.710673613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:59.929058  232267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:21:00.466508  232267 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:21:00.466651  232267 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1013 21:21:00.468960  232267 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, registry-creds, storage-provisioner, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1013 21:21:00.470144  232267 addons.go:514] duration metric: took 1m48.082551271s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns registry-creds storage-provisioner default-storageclass nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1013 21:21:00.470188  232267 start.go:246] waiting for cluster config update ...
	I1013 21:21:00.470214  232267 start.go:255] writing updated cluster config ...
	I1013 21:21:00.470510  232267 ssh_runner.go:195] Run: rm -f paused
	I1013 21:21:00.474646  232267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:21:00.478738  232267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrwcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.483233  232267 pod_ready.go:94] pod "coredns-66bc5c9577-hrwcq" is "Ready"
	I1013 21:21:00.483259  232267 pod_ready.go:86] duration metric: took 4.496946ms for pod "coredns-66bc5c9577-hrwcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.485451  232267 pod_ready.go:83] waiting for pod "etcd-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.489276  232267 pod_ready.go:94] pod "etcd-addons-143775" is "Ready"
	I1013 21:21:00.489299  232267 pod_ready.go:86] duration metric: took 3.830168ms for pod "etcd-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.491222  232267 pod_ready.go:83] waiting for pod "kube-apiserver-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.494683  232267 pod_ready.go:94] pod "kube-apiserver-addons-143775" is "Ready"
	I1013 21:21:00.494702  232267 pod_ready.go:86] duration metric: took 3.461071ms for pod "kube-apiserver-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.496584  232267 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:00.879089  232267 pod_ready.go:94] pod "kube-controller-manager-addons-143775" is "Ready"
	I1013 21:21:00.879123  232267 pod_ready.go:86] duration metric: took 382.522118ms for pod "kube-controller-manager-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.078960  232267 pod_ready.go:83] waiting for pod "kube-proxy-m55cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.478369  232267 pod_ready.go:94] pod "kube-proxy-m55cq" is "Ready"
	I1013 21:21:01.478397  232267 pod_ready.go:86] duration metric: took 399.409914ms for pod "kube-proxy-m55cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:01.678701  232267 pod_ready.go:83] waiting for pod "kube-scheduler-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:02.079364  232267 pod_ready.go:94] pod "kube-scheduler-addons-143775" is "Ready"
	I1013 21:21:02.079399  232267 pod_ready.go:86] duration metric: took 400.668182ms for pod "kube-scheduler-addons-143775" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:21:02.079411  232267 pod_ready.go:40] duration metric: took 1.60473781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:21:02.125252  232267 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 21:21:02.127556  232267 out.go:179] * Done! kubectl is now configured to use "addons-143775" cluster and "default" namespace by default
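	
	The repeated apply failures above all reduce to a single client-side validation error: /etc/kubernetes/addons/ig-crd.yaml reaches kubectl without the mandatory apiVersion and kind fields, so every attempt (including the retries at 21:20:37 and 21:20:59) fails identically. A minimal shell sketch for confirming this on the node; the profile name and paths are taken from the log above, and the inspection step is illustrative rather than part of the test harness:
	
		# Inspect the manifest exactly as kubectl sees it on the node
		minikube -p addons-143775 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml
	
		# Every Kubernetes object must carry type metadata; for a CRD the header is:
		#   apiVersion: apiextensions.k8s.io/v1
		#   kind: CustomResourceDefinition
		# If those lines are absent (or the file is empty), validation fails as above.
	
		# The error message itself names the escape hatch; note this only skips
		# client-side validation, it does not repair a broken manifest:
		minikube -p addons-143775 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		    /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false \
		    -f /etc/kubernetes/addons/ig-crd.yaml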
	
	
	==> CRI-O <==
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.094218829Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.804474255Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9a9cf2c9-7f46-4951-81a5-13f0d9f88183 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.805131997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=83c067a6-31a7-464e-a07a-ed28238ad371 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.806534072Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b82592c1-5c49-448d-bc5d-a101dd53f6ee name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.810254853Z" level=info msg="Creating container: default/busybox/busybox" id=17464e50-0237-4d5a-b1f3-10edc988c53b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.810899005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.816296383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.81674707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.845250516Z" level=info msg="Created container d75436c0ee2da6ea711c2035f2913568721ae807c343d6f56422b17296c7e96c: default/busybox/busybox" id=17464e50-0237-4d5a-b1f3-10edc988c53b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.845860247Z" level=info msg="Starting container: d75436c0ee2da6ea711c2035f2913568721ae807c343d6f56422b17296c7e96c" id=c2390d96-901d-4cd2-bde7-f8836c6d5fb4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:21:03 addons-143775 crio[778]: time="2025-10-13T21:21:03.847630206Z" level=info msg="Started container" PID=6493 containerID=d75436c0ee2da6ea711c2035f2913568721ae807c343d6f56422b17296c7e96c description=default/busybox/busybox id=c2390d96-901d-4cd2-bde7-f8836c6d5fb4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc9c8bb6fdec2dc99809fde6823253d1cceb2757097b117ea0402a127041a260
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.734490297Z" level=info msg="Removing container: b5bd98cf30e8a65d0abdddb830a87022ebbceea35febd16106f62d402bdc9578" id=1b916c4d-40d1-4a0f-8a67-e9287cc65ab2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.741832883Z" level=info msg="Removed container b5bd98cf30e8a65d0abdddb830a87022ebbceea35febd16106f62d402bdc9578: gcp-auth/gcp-auth-certs-patch-xc2z4/patch" id=1b916c4d-40d1-4a0f-8a67-e9287cc65ab2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.743549457Z" level=info msg="Removing container: 163da1840ce71d6675d25dfb4a95eb6d8bddaeb14fa335ae1836aaba52e75cf1" id=a3d5ce4e-1588-4406-b12f-2d9ceebf626a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.750056927Z" level=info msg="Removed container 163da1840ce71d6675d25dfb4a95eb6d8bddaeb14fa335ae1836aaba52e75cf1: gcp-auth/gcp-auth-certs-create-24zqp/create" id=a3d5ce4e-1588-4406-b12f-2d9ceebf626a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.752675994Z" level=info msg="Stopping pod sandbox: e299c5e9a85b2212f221d2d9e72b3f3922e4b56c504a51a6159402487d07a3f4" id=71a171e3-f808-4f96-a988-bac4ef3e9de9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.75271741Z" level=info msg="Stopped pod sandbox (already stopped): e299c5e9a85b2212f221d2d9e72b3f3922e4b56c504a51a6159402487d07a3f4" id=71a171e3-f808-4f96-a988-bac4ef3e9de9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.753198687Z" level=info msg="Removing pod sandbox: e299c5e9a85b2212f221d2d9e72b3f3922e4b56c504a51a6159402487d07a3f4" id=5164a4d2-67ab-4bd2-a91c-b4aa38b859f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.755790547Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.755849324Z" level=info msg="Removed pod sandbox: e299c5e9a85b2212f221d2d9e72b3f3922e4b56c504a51a6159402487d07a3f4" id=5164a4d2-67ab-4bd2-a91c-b4aa38b859f1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.75638336Z" level=info msg="Stopping pod sandbox: 86d98d2d590825c4ed8566937feadf1911723f9c936826b747f39b94e1ef9eb7" id=7fca1f20-2b27-493f-8cbe-237949c45719 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.756435046Z" level=info msg="Stopped pod sandbox (already stopped): 86d98d2d590825c4ed8566937feadf1911723f9c936826b747f39b94e1ef9eb7" id=7fca1f20-2b27-493f-8cbe-237949c45719 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.756714243Z" level=info msg="Removing pod sandbox: 86d98d2d590825c4ed8566937feadf1911723f9c936826b747f39b94e1ef9eb7" id=13630825-7d07-4a3d-98f6-439fdfa052ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.759410766Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 21:21:06 addons-143775 crio[778]: time="2025-10-13T21:21:06.759474619Z" level=info msg="Removed pod sandbox: 86d98d2d590825c4ed8566937feadf1911723f9c936826b747f39b94e1ef9eb7" id=13630825-7d07-4a3d-98f6-439fdfa052ba name=/runtime.v1.RuntimeService/RemovePodSandbox
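	
	The CRI-O activity above (image pull, container creation, sandbox garbage collection) can be cross-checked on the node with crictl, the standard CRI debugging CLI. A short sketch under the same profile; the truncated container ID comes from the status table below and should resolve by prefix:
	
		# List containers and pod sandboxes as CRI-O itself reports them
		minikube -p addons-143775 ssh -- sudo crictl ps
		minikube -p addons-143775 ssh -- sudo crictl pods
	
		# Inspect the busybox container started at 21:21:03 above
		minikube -p addons-143775 ssh -- sudo crictl inspect d75436c0ee2da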
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d75436c0ee2da       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   dc9c8bb6fdec2       busybox                                     default
	33180043b49d2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          35 seconds ago       Running             csi-snapshotter                          0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	29890b5558c66       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          36 seconds ago       Running             csi-provisioner                          0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	0f56c52e6564a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            37 seconds ago       Running             liveness-probe                           0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	dfd7f05ad90ea       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           38 seconds ago       Running             hostpath                                 0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	1ca16a8f31ca6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 38 seconds ago       Running             gcp-auth                                 0                   6365076ed95a3       gcp-auth-78565c9fb4-drvz6                   gcp-auth
	9da9822bfa300       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            39 seconds ago       Running             gadget                                   0                   f16e164ebc52d       gadget-lkcrw                                gadget
	d3f41f21c86bd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                42 seconds ago       Running             node-driver-registrar                    0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	5621e395830f5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             43 seconds ago       Running             controller                               0                   57b69f8a7447e       ingress-nginx-controller-675c5ddd98-cvxfz   ingress-nginx
	178e4409ca2b6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              47 seconds ago       Running             registry-proxy                           0                   f8214b125d132       registry-proxy-rrhdd                        kube-system
	8d550cc3998c8       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     48 seconds ago       Running             nvidia-device-plugin-ctr                 0                   60add3cf45101       nvidia-device-plugin-daemonset-dncl2        kube-system
	57bd7bb06e366       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     56 seconds ago       Running             amd-gpu-device-plugin                    0                   327972e03f278       amd-gpu-device-plugin-ppkwz                 kube-system
	03f55a19579f6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   57 seconds ago       Running             csi-external-health-monitor-controller   0                   24ea6c7f92445       csi-hostpathplugin-74gj5                    kube-system
	37d832fcb8c1f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   2c16479167a4f       snapshot-controller-7d9fbc56b8-kkj6s        kube-system
	0316d05383999       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   349a43c34d66a       snapshot-controller-7d9fbc56b8-zv74f        kube-system
	c42f211cc6800       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   59 seconds ago       Exited              patch                                    0                   fcf974a645f96       ingress-nginx-admission-patch-nrsqr         ingress-nginx
	630a251fc66ba       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             59 seconds ago       Running             csi-attacher                             0                   2721c99f66266       csi-hostpath-attacher-0                     kube-system
	03c7460cdbd20       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   8da65f70d0dee       metrics-server-85b7d694d7-vdzpz             kube-system
	0e9754c3036df       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   de6d113547ec4       csi-hostpath-resizer-0                      kube-system
	4270a9ae8a25b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   5c7a95ca85d43       ingress-nginx-admission-create-jm9d9        ingress-nginx
	fd4aee1022dce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   627a6eee50231       local-path-provisioner-648f6765c9-6dwg5     local-path-storage
	e57df483a324f       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   e3ab846f99e32       registry-6b586f9694-h4pdt                   kube-system
	a5b743f1ce5c1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   bf88fe493eabb       yakd-dashboard-5ff678cb9-j4nvc              yakd-dashboard
	a21bb2b294cea       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   f530edb9dc256       kube-ingress-dns-minikube                   kube-system
	b7da2064722f5       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   473c9fb727e90       cloud-spanner-emulator-86bd5cbb97-tr882     default
	278b4b7546c8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   cf835b56a046c       coredns-66bc5c9577-hrwcq                    kube-system
	e208e9862015d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   bac617f9b937f       storage-provisioner                         kube-system
	4a3e089044a38       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   1684b1b800d4a       kindnet-gxtvs                               kube-system
	ac355aa00aaae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   040a74504a80b       kube-proxy-m55cq                            kube-system
	fc72bcf650d5a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   ced9bc5edb2b1       kube-apiserver-addons-143775                kube-system
	4f9c304b23eab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   7f6253c4294cd       etcd-addons-143775                          kube-system
	c0af2973488b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   79036bd56c3eb       kube-scheduler-addons-143775                kube-system
	6cbf217264895       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   ad6e3df91e539       kube-controller-manager-addons-143775       kube-system
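	
	The namespaces in this table line up with the label selectors the kapi.go loop was polling earlier (registry and csi-hostpath-driver in kube-system, the controller in ingress-nginx, gcp-auth in gcp-auth). A sketch reproducing that readiness wait with plain kubectl; the selectors are taken verbatim from the log, the timeout is illustrative:
	
		# List the addon pods behind two of the polled selectors
		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	
		# Block until they report Ready, mirroring the polling loop above
		kubectl -n kube-system wait pod \
		    -l kubernetes.io/minikube-addons=csi-hostpath-driver \
		    --for=condition=Ready --timeout=5m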
	
	
	==> coredns [278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374] <==
	[INFO] 10.244.0.16:40850 - 22601 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003197673s
	[INFO] 10.244.0.16:50204 - 30873 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000093382s
	[INFO] 10.244.0.16:50204 - 30538 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000126522s
	[INFO] 10.244.0.16:33357 - 50389 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066301s
	[INFO] 10.244.0.16:33357 - 50717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000117178s
	[INFO] 10.244.0.16:34239 - 64276 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000069267s
	[INFO] 10.244.0.16:34239 - 64024 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000105665s
	[INFO] 10.244.0.16:52310 - 52383 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000118849s
	[INFO] 10.244.0.16:52310 - 52596 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00019602s
	[INFO] 10.244.0.22:37694 - 40578 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00023172s
	[INFO] 10.244.0.22:35755 - 61812 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000325257s
	[INFO] 10.244.0.22:51685 - 55318 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137504s
	[INFO] 10.244.0.22:38462 - 24515 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000218678s
	[INFO] 10.244.0.22:42242 - 59127 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151746s
	[INFO] 10.244.0.22:35027 - 4995 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155744s
	[INFO] 10.244.0.22:34705 - 2579 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003633638s
	[INFO] 10.244.0.22:37237 - 11493 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003650225s
	[INFO] 10.244.0.22:50949 - 4182 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004757286s
	[INFO] 10.244.0.22:54153 - 39426 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004891933s
	[INFO] 10.244.0.22:42992 - 14299 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005172396s
	[INFO] 10.244.0.22:35143 - 40165 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006667947s
	[INFO] 10.244.0.22:54182 - 48169 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004977609s
	[INFO] 10.244.0.22:49304 - 18842 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00722354s
	[INFO] 10.244.0.22:35535 - 44233 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002203614s
	[INFO] 10.244.0.22:60572 - 61408 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002651003s
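	
	The NXDOMAIN bursts above are not failures: with the default pod resolv.conf (ndots:5), each short name is tried against every search domain (cluster.local, then the GCE-provided domains) before the final absolute query returns NOERROR. A sketch for observing the expansion from inside a pod; the busybox pod in the default namespace above should ship nslookup:
	
		# The search domains that drive the NXDOMAIN expansion
		kubectl exec busybox -- cat /etc/resolv.conf
	
		# A fully-qualified query resolves directly, matching the NOERROR lines
		kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local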
	
	
	==> describe nodes <==
	Name:               addons-143775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-143775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=addons-143775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_19_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-143775
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-143775"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:19:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-143775
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:21:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:21:09 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:21:09 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:21:09 +0000   Mon, 13 Oct 2025 21:19:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:21:09 +0000   Mon, 13 Oct 2025 21:19:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-143775
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                5ac6ceea-0799-4c1e-8b09-5c6dad1bf3ad
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-tr882      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gadget                      gadget-lkcrw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-drvz6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-cvxfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         117s
	  kube-system                 amd-gpu-device-plugin-ppkwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 coredns-66bc5c9577-hrwcq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     119s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-74gj5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 etcd-addons-143775                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-gxtvs                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-addons-143775                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-addons-143775        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-m55cq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-addons-143775                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 metrics-server-85b7d694d7-vdzpz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-dncl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 registry-6b586f9694-h4pdt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-creds-764b6fb674-skkk5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-proxy-rrhdd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 snapshot-controller-7d9fbc56b8-kkj6s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-zv74f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-6dwg5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-j4nvc               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 117s                 kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-143775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-143775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node addons-143775 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                 kubelet          Node addons-143775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                 kubelet          Node addons-143775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                 kubelet          Node addons-143775 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m                   node-controller  Node addons-143775 event: Registered Node addons-143775 in Controller
	  Normal  NodeReady                78s                  kubelet          Node addons-143775 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 20:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001860] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.414628] i8042: Warning: Keylock active
	[  +0.017552] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004900] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001065] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000958] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000927] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001023] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000942] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000760] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000814] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000752] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.514114] block sda: the capability attribute has been deprecated.
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e] <==
	{"level":"warn","ts":"2025-10-13T21:19:03.881889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.888928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.902496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.909549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.915908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:03.964617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:15.065445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:15.072717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.566555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.574740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.593148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:19:41.600235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:20:10.096002Z","caller":"traceutil/trace.go:172","msg":"trace[1412483323] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"126.617907ms","start":"2025-10-13T21:20:09.969340Z","end":"2025-10-13T21:20:10.095958Z","steps":["trace[1412483323] 'process raft request'  (duration: 117.373687ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:10.223826Z","caller":"traceutil/trace.go:172","msg":"trace[112034507] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"119.053308ms","start":"2025-10-13T21:20:10.104751Z","end":"2025-10-13T21:20:10.223804Z","steps":["trace[112034507] 'process raft request'  (duration: 97.050551ms)","trace[112034507] 'compare'  (duration: 21.898434ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T21:20:16.108251Z","caller":"traceutil/trace.go:172","msg":"trace[1288555985] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"112.647003ms","start":"2025-10-13T21:20:15.995571Z","end":"2025-10-13T21:20:16.108218Z","steps":["trace[1288555985] 'process raft request'  (duration: 101.143409ms)","trace[1288555985] 'compare'  (duration: 11.306947ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T21:20:16.333074Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.591605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:16.333188Z","caller":"traceutil/trace.go:172","msg":"trace[1456737397] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:1133; }","duration":"138.731306ms","start":"2025-10-13T21:20:16.194438Z","end":"2025-10-13T21:20:16.333170Z","steps":["trace[1456737397] 'agreement among raft nodes before linearized reading'  (duration: 64.699957ms)","trace[1456737397] 'range keys from in-memory index tree'  (duration: 73.857235ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T21:20:16.333181Z","caller":"traceutil/trace.go:172","msg":"trace[1865999962] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"134.45271ms","start":"2025-10-13T21:20:16.198713Z","end":"2025-10-13T21:20:16.333166Z","steps":["trace[1865999962] 'process raft request'  (duration: 134.416234ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:16.333200Z","caller":"traceutil/trace.go:172","msg":"trace[1272552941] transaction","detail":"{read_only:false; response_revision:1134; number_of_response:1; }","duration":"175.928396ms","start":"2025-10-13T21:20:16.157255Z","end":"2025-10-13T21:20:16.333183Z","steps":["trace[1272552941] 'process raft request'  (duration: 101.896571ms)","trace[1272552941] 'compare'  (duration: 73.832683ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T21:20:16.333274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.756044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-13T21:20:16.333300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.690863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:16.333327Z","caller":"traceutil/trace.go:172","msg":"trace[973915948] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1135; }","duration":"115.824732ms","start":"2025-10-13T21:20:16.217492Z","end":"2025-10-13T21:20:16.333317Z","steps":["trace[973915948] 'agreement among raft nodes before linearized reading'  (duration: 115.709322ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:16.333340Z","caller":"traceutil/trace.go:172","msg":"trace[1818805710] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"108.735262ms","start":"2025-10-13T21:20:16.224596Z","end":"2025-10-13T21:20:16.333331Z","steps":["trace[1818805710] 'agreement among raft nodes before linearized reading'  (duration: 108.666457ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:25.265744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.742383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:25.265856Z","caller":"traceutil/trace.go:172","msg":"trace[534535115] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1158; }","duration":"100.872855ms","start":"2025-10-13T21:20:25.164960Z","end":"2025-10-13T21:20:25.265832Z","steps":["trace[534535115] 'range keys from in-memory index tree'  (duration: 100.617376ms)"],"step_count":1}
	
	
	==> gcp-auth [1ca16a8f31ca6b8e660253e4041382c226282d784a9c6661b7394d9464b80c6b] <==
	2025/10/13 21:20:32 GCP Auth Webhook started!
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	2025/10/13 21:21:02 Ready to marshal response ...
	2025/10/13 21:21:02 Ready to write response ...
	
	
	==> kernel <==
	 21:21:11 up  1:03,  0 user,  load average: 1.64, 28.86, 58.69
	Linux addons-143775 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057] <==
	I1013 21:19:13.667197       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:19:13.667399       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 21:19:43.668375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 21:19:43.670747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 21:19:43.670915       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 21:19:43.698407       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1013 21:19:45.067921       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:19:45.067954       1 metrics.go:72] Registering metrics
	I1013 21:19:45.068019       1 controller.go:711] "Syncing nftables rules"
	I1013 21:19:53.671871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:19:53.671948       1 main.go:301] handling current node
	I1013 21:20:03.667152       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:03.667252       1 main.go:301] handling current node
	I1013 21:20:13.666631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:13.666669       1 main.go:301] handling current node
	I1013 21:20:23.672229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:23.672283       1 main.go:301] handling current node
	I1013 21:20:33.667157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:33.667186       1 main.go:301] handling current node
	I1013 21:20:43.668691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:43.668785       1 main.go:301] handling current node
	I1013 21:20:53.667285       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:20:53.667340       1 main.go:301] handling current node
	I1013 21:21:03.667229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:21:03.667280       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786] <==
	W1013 21:19:15.072659       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1013 21:19:20.660956       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.115.42"}
	W1013 21:19:41.566447       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:41.574665       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:41.593104       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:41.600153       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:19:54.016822       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.016945       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.016965       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.017025       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.037647       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.037693       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:19:54.043334       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.115.42:443: connect: connection refused
	E1013 21:19:54.043376       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.115.42:443: connect: connection refused" logger="UnhandledError"
	W1013 21:20:11.992630       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 21:20:11.992655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	E1013 21:20:11.992703       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 21:20:11.993150       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	E1013 21:20:11.998319       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.65.187:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.65.187:443: connect: connection refused" logger="UnhandledError"
	I1013 21:20:12.048418       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 21:21:09.916275       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51806: use of closed network connection
	E1013 21:21:10.069433       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51828: use of closed network connection
	
	
	==> kube-controller-manager [6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9] <==
	I1013 21:19:11.549475       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:19:11.549594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:19:11.549699       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:19:11.549717       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 21:19:11.549814       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 21:19:11.549940       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:19:11.550066       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:19:11.552320       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:19:11.552336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 21:19:11.553200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:19:11.553221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:19:11.554064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:19:11.554067       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:11.556330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:11.570272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1013 21:19:41.558763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:19:41.558917       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 21:19:41.558974       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 21:19:41.580056       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 21:19:41.585033       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 21:19:41.660006       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:19:41.685522       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:19:56.478843       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1013 21:20:11.665562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:20:11.692800       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b] <==
	I1013 21:19:13.390136       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:19:13.601601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:19:13.702137       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:19:13.702180       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:19:13.702294       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:19:13.774189       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:19:13.775067       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:19:13.784459       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:19:13.791656       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:19:13.791850       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:19:13.793950       1 config.go:200] "Starting service config controller"
	I1013 21:19:13.796099       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:19:13.794394       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:19:13.796194       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:19:13.794416       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:19:13.796242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:19:13.795285       1 config.go:309] "Starting node config controller"
	I1013 21:19:13.796290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:19:13.796314       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:19:13.896355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:19:13.896364       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:19:13.896451       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363] <==
	I1013 21:19:05.169728       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:19:05.171515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:19:05.171550       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:19:05.171761       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:19:05.171816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 21:19:05.174153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:19:05.174477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:19:05.174496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:19:05.174666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:19:05.174830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:19:05.174981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:19:05.175082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:19:05.175149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:19:05.175404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:19:05.175508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:19:05.175656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:19:05.175510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:19:05.175681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:19:05.175679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:19:05.175887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:19:05.175889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:19:05.175165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:19:05.176062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:19:05.176838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1013 21:19:06.571856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:20:16 addons-143775 kubelet[1297]: I1013 21:20:16.123329    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-ppkwz" podStartSLOduration=1.188934904 podStartE2EDuration="22.123303222s" podCreationTimestamp="2025-10-13 21:19:54 +0000 UTC" firstStartedPulling="2025-10-13 21:19:54.488296323 +0000 UTC m=+47.836442598" lastFinishedPulling="2025-10-13 21:20:15.42266464 +0000 UTC m=+68.770810916" observedRunningTime="2025-10-13 21:20:16.110640644 +0000 UTC m=+69.458786939" watchObservedRunningTime="2025-10-13 21:20:16.123303222 +0000 UTC m=+69.471449518"
	Oct 13 21:20:16 addons-143775 kubelet[1297]: I1013 21:20:16.995562    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ppkwz" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:20:24 addons-143775 kubelet[1297]: I1013 21:20:24.023500    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dncl2" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:20:24 addons-143775 kubelet[1297]: I1013 21:20:24.035488    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-dncl2" podStartSLOduration=1.532591385 podStartE2EDuration="30.035462009s" podCreationTimestamp="2025-10-13 21:19:54 +0000 UTC" firstStartedPulling="2025-10-13 21:19:54.489485658 +0000 UTC m=+47.837631943" lastFinishedPulling="2025-10-13 21:20:22.99235629 +0000 UTC m=+76.340502567" observedRunningTime="2025-10-13 21:20:24.034858746 +0000 UTC m=+77.383005027" watchObservedRunningTime="2025-10-13 21:20:24.035462009 +0000 UTC m=+77.383608305"
	Oct 13 21:20:25 addons-143775 kubelet[1297]: I1013 21:20:25.034403    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rrhdd" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:20:25 addons-143775 kubelet[1297]: I1013 21:20:25.034772    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dncl2" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:20:25 addons-143775 kubelet[1297]: E1013 21:20:25.892629    1297 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 13 21:20:25 addons-143775 kubelet[1297]: E1013 21:20:25.892736    1297 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a746932d-4fa8-46a2-96bc-caf52484966b-gcr-creds podName:a746932d-4fa8-46a2-96bc-caf52484966b nodeName:}" failed. No retries permitted until 2025-10-13 21:20:57.892711524 +0000 UTC m=+111.240857818 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a746932d-4fa8-46a2-96bc-caf52484966b-gcr-creds") pod "registry-creds-764b6fb674-skkk5" (UID: "a746932d-4fa8-46a2-96bc-caf52484966b") : secret "registry-creds-gcr" not found
	Oct 13 21:20:26 addons-143775 kubelet[1297]: I1013 21:20:26.038278    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rrhdd" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:20:29 addons-143775 kubelet[1297]: I1013 21:20:29.064044    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-cvxfz" podStartSLOduration=57.615137984 podStartE2EDuration="1m15.064026122s" podCreationTimestamp="2025-10-13 21:19:14 +0000 UTC" firstStartedPulling="2025-10-13 21:20:10.786653168 +0000 UTC m=+64.134799445" lastFinishedPulling="2025-10-13 21:20:28.235541294 +0000 UTC m=+81.583687583" observedRunningTime="2025-10-13 21:20:29.063908283 +0000 UTC m=+82.412054579" watchObservedRunningTime="2025-10-13 21:20:29.064026122 +0000 UTC m=+82.412172417"
	Oct 13 21:20:29 addons-143775 kubelet[1297]: I1013 21:20:29.065133    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-rrhdd" podStartSLOduration=5.275024971 podStartE2EDuration="35.065116864s" podCreationTimestamp="2025-10-13 21:19:54 +0000 UTC" firstStartedPulling="2025-10-13 21:19:54.513128678 +0000 UTC m=+47.861274971" lastFinishedPulling="2025-10-13 21:20:24.303220591 +0000 UTC m=+77.651366864" observedRunningTime="2025-10-13 21:20:25.052462581 +0000 UTC m=+78.400608877" watchObservedRunningTime="2025-10-13 21:20:29.065116864 +0000 UTC m=+82.413263159"
	Oct 13 21:20:33 addons-143775 kubelet[1297]: I1013 21:20:33.083364    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-lkcrw" podStartSLOduration=70.241701756 podStartE2EDuration="1m20.083338771s" podCreationTimestamp="2025-10-13 21:19:13 +0000 UTC" firstStartedPulling="2025-10-13 21:20:21.785439596 +0000 UTC m=+75.133585874" lastFinishedPulling="2025-10-13 21:20:31.627076612 +0000 UTC m=+84.975222889" observedRunningTime="2025-10-13 21:20:32.082756608 +0000 UTC m=+85.430902918" watchObservedRunningTime="2025-10-13 21:20:33.083338771 +0000 UTC m=+86.431485065"
	Oct 13 21:20:33 addons-143775 kubelet[1297]: I1013 21:20:33.083583    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-drvz6" podStartSLOduration=67.338350399 podStartE2EDuration="1m13.083568982s" podCreationTimestamp="2025-10-13 21:19:20 +0000 UTC" firstStartedPulling="2025-10-13 21:20:27.105496284 +0000 UTC m=+80.453642559" lastFinishedPulling="2025-10-13 21:20:32.850714847 +0000 UTC m=+86.198861142" observedRunningTime="2025-10-13 21:20:33.082314747 +0000 UTC m=+86.430461043" watchObservedRunningTime="2025-10-13 21:20:33.083568982 +0000 UTC m=+86.431715277"
	Oct 13 21:20:34 addons-143775 kubelet[1297]: I1013 21:20:34.798418    1297 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 13 21:20:34 addons-143775 kubelet[1297]: I1013 21:20:34.798466    1297 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 13 21:20:37 addons-143775 kubelet[1297]: I1013 21:20:37.114426    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-74gj5" podStartSLOduration=1.134071611 podStartE2EDuration="43.114402462s" podCreationTimestamp="2025-10-13 21:19:54 +0000 UTC" firstStartedPulling="2025-10-13 21:19:54.48556778 +0000 UTC m=+47.833714073" lastFinishedPulling="2025-10-13 21:20:36.465898646 +0000 UTC m=+89.814044924" observedRunningTime="2025-10-13 21:20:37.112311534 +0000 UTC m=+90.460457852" watchObservedRunningTime="2025-10-13 21:20:37.114402462 +0000 UTC m=+90.462548759"
	Oct 13 21:20:46 addons-143775 kubelet[1297]: I1013 21:20:46.738868    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1480305-0aa9-40a7-915f-996aed30b119" path="/var/lib/kubelet/pods/e1480305-0aa9-40a7-915f-996aed30b119/volumes"
	Oct 13 21:20:46 addons-143775 kubelet[1297]: I1013 21:20:46.739308    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab40d81-4668-48c2-92ff-c40103171e1b" path="/var/lib/kubelet/pods/eab40d81-4668-48c2-92ff-c40103171e1b/volumes"
	Oct 13 21:20:57 addons-143775 kubelet[1297]: E1013 21:20:57.947013    1297 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 13 21:20:57 addons-143775 kubelet[1297]: E1013 21:20:57.947148    1297 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a746932d-4fa8-46a2-96bc-caf52484966b-gcr-creds podName:a746932d-4fa8-46a2-96bc-caf52484966b nodeName:}" failed. No retries permitted until 2025-10-13 21:22:01.94712824 +0000 UTC m=+175.295274514 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a746932d-4fa8-46a2-96bc-caf52484966b-gcr-creds") pod "registry-creds-764b6fb674-skkk5" (UID: "a746932d-4fa8-46a2-96bc-caf52484966b") : secret "registry-creds-gcr" not found
	Oct 13 21:21:02 addons-143775 kubelet[1297]: I1013 21:21:02.882830    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c963d75c-856f-4d71-8188-3a63254f88b8-gcp-creds\") pod \"busybox\" (UID: \"c963d75c-856f-4d71-8188-3a63254f88b8\") " pod="default/busybox"
	Oct 13 21:21:02 addons-143775 kubelet[1297]: I1013 21:21:02.882893    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6c27\" (UniqueName: \"kubernetes.io/projected/c963d75c-856f-4d71-8188-3a63254f88b8-kube-api-access-x6c27\") pod \"busybox\" (UID: \"c963d75c-856f-4d71-8188-3a63254f88b8\") " pod="default/busybox"
	Oct 13 21:21:04 addons-143775 kubelet[1297]: I1013 21:21:04.217049    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.503360352 podStartE2EDuration="2.217025989s" podCreationTimestamp="2025-10-13 21:21:02 +0000 UTC" firstStartedPulling="2025-10-13 21:21:03.092215833 +0000 UTC m=+116.440362107" lastFinishedPulling="2025-10-13 21:21:03.805881471 +0000 UTC m=+117.154027744" observedRunningTime="2025-10-13 21:21:04.215549744 +0000 UTC m=+117.563696049" watchObservedRunningTime="2025-10-13 21:21:04.217025989 +0000 UTC m=+117.565172284"
	Oct 13 21:21:06 addons-143775 kubelet[1297]: I1013 21:21:06.732892    1297 scope.go:117] "RemoveContainer" containerID="b5bd98cf30e8a65d0abdddb830a87022ebbceea35febd16106f62d402bdc9578"
	Oct 13 21:21:06 addons-143775 kubelet[1297]: I1013 21:21:06.742188    1297 scope.go:117] "RemoveContainer" containerID="163da1840ce71d6675d25dfb4a95eb6d8bddaeb14fa335ae1836aaba52e75cf1"
	
	
	==> storage-provisioner [e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b] <==
	W1013 21:20:46.948420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:48.951605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:48.956519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:50.959778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:50.963796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:52.966628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:52.972019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:54.975257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:54.979629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:56.982288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:56.987896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:58.990814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:20:58.994690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:00.997493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:01.002511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:03.005521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:03.009507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:05.013089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:05.017426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:07.020508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:07.026846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:09.030313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:09.036919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:11.040718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:21:11.044858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-143775 -n addons-143775
helpers_test.go:269: (dbg) Run:  kubectl --context addons-143775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr registry-creds-764b6fb674-skkk5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr registry-creds-764b6fb674-skkk5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr registry-creds-764b6fb674-skkk5: exit status 1 (62.615019ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jm9d9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nrsqr" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-skkk5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-143775 describe pod ingress-nginx-admission-create-jm9d9 ingress-nginx-admission-patch-nrsqr registry-creds-764b6fb674-skkk5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.971897ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:12.702458  241564 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:12.702789  241564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:12.702808  241564 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:12.702812  241564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:12.703106  241564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:12.703466  241564 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:12.703941  241564 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:12.703961  241564 addons.go:606] checking whether the cluster is paused
	I1013 21:21:12.704076  241564 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:12.704092  241564 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:12.704489  241564 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:12.721753  241564 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:12.721815  241564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:12.743041  241564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:12.840048  241564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:12.840150  241564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:12.871792  241564 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:12.871814  241564 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:12.871819  241564 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:12.871824  241564 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:12.871828  241564 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:12.871833  241564 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:12.871838  241564 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:12.871842  241564 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:12.871859  241564 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:12.871867  241564 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:12.871872  241564 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:12.871876  241564 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:12.871881  241564 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:12.871885  241564 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:12.871889  241564 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:12.871905  241564 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:12.871912  241564 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:12.871919  241564 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:12.871923  241564 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:12.871927  241564 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:12.871931  241564 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:12.871934  241564 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:12.871937  241564 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:12.871939  241564 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:12.871941  241564 cri.go:89] found id: ""
	I1013 21:21:12.872006  241564 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:12.887059  241564 out.go:203] 
	W1013 21:21:12.888682  241564 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:12.888704  241564 out.go:285] * 
	* 
	W1013 21:21:12.891824  241564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:12.893292  241564 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.59s)
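Every addons disable in this group fails identically: before removing the addon, minikube checks whether the cluster is paused by listing runtime containers over SSH, and `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node. A hand reproduction of that check, as a sketch against the still-running addons-143775 profile (the crun state path is an assumption: if this crio image runs containers with crun rather than runc, no runc state directory is ever created, which would match the error):

	# Re-run the exact command minikube's paused-check executes on the node:
	out/minikube-linux-amd64 -p addons-143775 ssh -- sudo runc list -f json
	# expected, per the log: level=error msg="open /run/runc: no such file or directory"

	# If crio's default OCI runtime is crun (assumption), its state lives elsewhere:
	out/minikube-linux-amd64 -p addons-143775 ssh -- ls /run/crun

Each of the remaining MK_ADDON_DISABLE_PAUSED exits below is this same check failing with identical output.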

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-tr882" [6e9dc9e5-fc1e-4a19-b9e2-020c0a882900] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004282505s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (243.741043ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:26.372322  243458 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:26.372600  243458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:26.372612  243458 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:26.372616  243458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:26.372827  243458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:26.373127  243458 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:26.373493  243458 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:26.373510  243458 addons.go:606] checking whether the cluster is paused
	I1013 21:21:26.373589  243458 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:26.373603  243458 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:26.373965  243458 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:26.392070  243458 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:26.392126  243458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:26.410247  243458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:26.507932  243458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:26.508053  243458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:26.539459  243458 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:26.539481  243458 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:26.539485  243458 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:26.539488  243458 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:26.539491  243458 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:26.539493  243458 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:26.539496  243458 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:26.539498  243458 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:26.539500  243458 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:26.539505  243458 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:26.539507  243458 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:26.539510  243458 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:26.539512  243458 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:26.539514  243458 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:26.539517  243458 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:26.539521  243458 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:26.539523  243458 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:26.539528  243458 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:26.539540  243458 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:26.539542  243458 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:26.539545  243458 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:26.539547  243458 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:26.539549  243458 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:26.539551  243458 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:26.539554  243458 cri.go:89] found id: ""
	I1013 21:21:26.539592  243458 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:26.553825  243458 out.go:203] 
	W1013 21:21:26.555522  243458 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:26.555543  243458 out.go:285] * 
	* 
	W1013 21:21:26.558621  243458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:26.560274  243458 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/LocalPath (8.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-143775 apply -f testdata/storage-provisioner-rancher/pvc.yaml
2025/10/13 21:21:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:955: (dbg) Run:  kubectl --context addons-143775 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [97eea71e-dcfd-4432-95cd-86886e584ff6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [97eea71e-dcfd-4432-95cd-86886e584ff6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [97eea71e-dcfd-4432-95cd-86886e584ff6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003726954s
addons_test.go:967: (dbg) Run:  kubectl --context addons-143775 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 ssh "cat /opt/local-path-provisioner/pvc-e6be8790-906c-4030-973a-777621257e3a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-143775 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-143775 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (241.003015ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:31.362057  243834 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:31.362356  243834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:31.362367  243834 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:31.362372  243834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:31.362560  243834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:31.362826  243834 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:31.363185  243834 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:31.363203  243834 addons.go:606] checking whether the cluster is paused
	I1013 21:21:31.363279  243834 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:31.363291  243834 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:31.363691  243834 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:31.381827  243834 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:31.381892  243834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:31.399768  243834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:31.497196  243834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:31.497295  243834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:31.528830  243834 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:31.528864  243834 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:31.528869  243834 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:31.528872  243834 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:31.528875  243834 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:31.528878  243834 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:31.528881  243834 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:31.528883  243834 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:31.528886  243834 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:31.528892  243834 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:31.528894  243834 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:31.528896  243834 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:31.528899  243834 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:31.528901  243834 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:31.528904  243834 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:31.528908  243834 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:31.528911  243834 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:31.528915  243834 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:31.528918  243834 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:31.528926  243834 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:31.528929  243834 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:31.528932  243834 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:31.528934  243834 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:31.528937  243834 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:31.528939  243834 cri.go:89] found id: ""
	I1013 21:21:31.529029  243834 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:31.544886  243834 out.go:203] 
	W1013 21:21:31.546454  243834 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:31.546492  243834 out.go:285] * 
	* 
	W1013 21:21:31.549922  243834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:31.551846  243834 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.13s)
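The LocalPath flow itself passed (test-pvc bound, the pod wrote file1 and the host served it back); only the trailing disable step hit the paused-check again. As an aside, the phase polling helpers_test.go does by re-running kubectl get pvc can be expressed with kubectl's built-in waiter; a sketch only, not what the harness actually runs:

	kubectl --context addons-143775 wait pvc/test-pvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=5m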

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dncl2" [20aff2ff-0ccf-43d1-b425-3353c5b46b49] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004252788s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (246.197983ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:15.364245  241679 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:15.364550  241679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.364561  241679 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:15.364567  241679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:15.364785  241679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:15.365097  241679 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:15.365464  241679 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.365482  241679 addons.go:606] checking whether the cluster is paused
	I1013 21:21:15.365561  241679 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:15.365573  241679 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:15.365949  241679 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:15.386070  241679 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:15.386153  241679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:15.407847  241679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:15.506580  241679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:15.506657  241679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:15.536960  241679 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:15.536984  241679 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:15.537002  241679 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:15.537009  241679 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:15.537013  241679 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:15.537019  241679 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:15.537023  241679 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:15.537027  241679 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:15.537032  241679 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:15.537043  241679 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:15.537051  241679 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:15.537055  241679 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:15.537059  241679 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:15.537062  241679 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:15.537065  241679 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:15.537069  241679 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:15.537074  241679 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:15.537085  241679 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:15.537087  241679 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:15.537090  241679 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:15.537093  241679 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:15.537095  241679 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:15.537098  241679 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:15.537101  241679 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:15.537103  241679 cri.go:89] found id: ""
	I1013 21:21:15.537142  241679 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:15.551744  241679 out.go:203] 
	W1013 21:21:15.553003  241679 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:15.553036  241679 out.go:285] * 
	* 
	W1013 21:21:15.556106  241679 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:15.557259  241679 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (5.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-j4nvc" [72015d03-6940-400e-bc93-5f9e5fa05a81] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004272279s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable yakd --alsologtostderr -v=1: exit status 11 (248.793403ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:23.228525  242929 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:23.228809  242929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:23.228819  242929 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:23.228824  242929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:23.229088  242929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:23.229399  242929 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:23.229748  242929 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:23.229777  242929 addons.go:606] checking whether the cluster is paused
	I1013 21:21:23.229866  242929 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:23.229878  242929 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:23.230324  242929 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:23.249305  242929 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:23.249380  242929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:23.268629  242929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:23.368688  242929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:23.368803  242929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:23.403857  242929 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:23.403890  242929 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:23.403897  242929 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:23.403902  242929 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:23.403906  242929 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:23.403909  242929 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:23.403912  242929 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:23.403916  242929 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:23.403920  242929 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:23.403928  242929 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:23.403932  242929 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:23.403937  242929 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:23.403941  242929 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:23.403946  242929 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:23.403950  242929 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:23.403964  242929 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:23.403971  242929 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:23.403977  242929 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:23.403981  242929 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:23.403985  242929 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:23.404020  242929 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:23.404025  242929 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:23.404029  242929 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:23.404033  242929 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:23.404037  242929 cri.go:89] found id: ""
	I1013 21:21:23.404094  242929 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:23.418800  242929 out.go:203] 
	W1013 21:21:23.419942  242929 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:23.419965  242929 out.go:285] * 
	* 
	W1013 21:21:23.423916  242929 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:23.425722  242929 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-ppkwz" [7266410e-a8ea-4a69-8452-d90353368f92] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003639716s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-143775 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-143775 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (238.056388ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:21:21.119486  242837 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:21:21.119791  242837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:21.119800  242837 out.go:374] Setting ErrFile to fd 2...
	I1013 21:21:21.119804  242837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:21:21.120028  242837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:21:21.120296  242837 mustload.go:65] Loading cluster: addons-143775
	I1013 21:21:21.120685  242837 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:21.120707  242837 addons.go:606] checking whether the cluster is paused
	I1013 21:21:21.120798  242837 config.go:182] Loaded profile config "addons-143775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:21:21.120811  242837 host.go:66] Checking if "addons-143775" exists ...
	I1013 21:21:21.121240  242837 cli_runner.go:164] Run: docker container inspect addons-143775 --format={{.State.Status}}
	I1013 21:21:21.139504  242837 ssh_runner.go:195] Run: systemctl --version
	I1013 21:21:21.139560  242837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-143775
	I1013 21:21:21.156473  242837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/addons-143775/id_rsa Username:docker}
	I1013 21:21:21.253229  242837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:21:21.253335  242837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:21:21.285180  242837 cri.go:89] found id: "33180043b49d2660b0b0b600c82306371a56f15be0f76fa12958684f8d911ab7"
	I1013 21:21:21.285219  242837 cri.go:89] found id: "29890b5558c66356cd00456d113ffbcb24b0560b6c7702281cc2b7832a9068d6"
	I1013 21:21:21.285225  242837 cri.go:89] found id: "0f56c52e6564ab264ee594edcb66e9f9db567c3d24471d2a8f79d82a5a385ecb"
	I1013 21:21:21.285230  242837 cri.go:89] found id: "dfd7f05ad90ea3b762daf7d97c4592e5f4cbe1ee5068a1ad9aae0dd44a46e977"
	I1013 21:21:21.285233  242837 cri.go:89] found id: "d3f41f21c86bd23b22b1ab82d1c432fc3df136f2ba776767673d0a1e38e70f57"
	I1013 21:21:21.285239  242837 cri.go:89] found id: "178e4409ca2b654b564cbef10d9087938f99ba1aff31a5af597008f5e505b073"
	I1013 21:21:21.285243  242837 cri.go:89] found id: "8d550cc3998c8b6fec3758bb4e81bf21f3792cdc452eaaf1573264c6d0da9c28"
	I1013 21:21:21.285247  242837 cri.go:89] found id: "57bd7bb06e366a05919fc26428aa0bbcd8e88c8e1503a650860ff4f6a69f0061"
	I1013 21:21:21.285251  242837 cri.go:89] found id: "03f55a19579f67bc53cdbf0555efc903f2df5a19107488ff4da9f693ae3d67be"
	I1013 21:21:21.285274  242837 cri.go:89] found id: "37d832fcb8c1f765f5710ea404d8d3238e6fc7a303954f93298b062481a9391f"
	I1013 21:21:21.285282  242837 cri.go:89] found id: "0316d05383999cb939c985fa5634e71b5f4766c07b29cb7b3f2db7cbd6783337"
	I1013 21:21:21.285286  242837 cri.go:89] found id: "630a251fc66ba47575f7dd7a06f4331d0ef17e4f414acb828ab6faab74a9d57d"
	I1013 21:21:21.285291  242837 cri.go:89] found id: "03c7460cdbd20bb306bb9b6b11e7d73452607a8503a269384f8624ceaf29065e"
	I1013 21:21:21.285297  242837 cri.go:89] found id: "0e9754c3036dfd2b0b62663ec77dd65bc2a44adab66d445bdc945a020f3d0fbc"
	I1013 21:21:21.285299  242837 cri.go:89] found id: "e57df483a324fce39e093dadf731dd3ec5c0ce557b47f472dc708e8af7d2b537"
	I1013 21:21:21.285312  242837 cri.go:89] found id: "a21bb2b294cead5d90e3f5593637bc6716719945f5e23d06cf01617fdee3e75e"
	I1013 21:21:21.285320  242837 cri.go:89] found id: "278b4b7546c8cbb271aa40024385c9a2115953314e4b6fd1f291d9686c2f7374"
	I1013 21:21:21.285324  242837 cri.go:89] found id: "e208e9862015d533dd0968f46404f89356f7ec7b3132965b94044f2e6a69cf3b"
	I1013 21:21:21.285326  242837 cri.go:89] found id: "4a3e089044a38a8fa3d24e8f2669f6febf78e7a168cc30e9e798a894d91d7057"
	I1013 21:21:21.285328  242837 cri.go:89] found id: "ac355aa00aaae919e93c529f16cc645b44f261b30a90efb74492d107645c316b"
	I1013 21:21:21.285331  242837 cri.go:89] found id: "fc72bcf650d5a023cf7f3dc7bac0e28433de3d691982471fd25a7d139667d786"
	I1013 21:21:21.285335  242837 cri.go:89] found id: "4f9c304b23eabe02395b1858ed2ac76623d0b6f4887bb6ee139a97ff4d2ea01e"
	I1013 21:21:21.285339  242837 cri.go:89] found id: "c0af2973488b6dc3da55356729afdb472a4832a5e5acfe7dee831dc817711363"
	I1013 21:21:21.285343  242837 cri.go:89] found id: "6cbf217264895ef9824d482611b2de76f8f55105f2f0de78e29b7723da223ae9"
	I1013 21:21:21.285346  242837 cri.go:89] found id: ""
	I1013 21:21:21.285405  242837 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:21:21.300053  242837 out.go:203] 
	W1013 21:21:21.301427  242837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:21:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:21:21.301460  242837 out.go:285] * 
	* 
	W1013 21:21:21.304563  242837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:21:21.305948  242837 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-143775 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

TestFunctional/parallel/ServiceCmdConnect (602.99s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-412292 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-412292 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-jndv7" [4ec0866c-b1c3-4c49-9b43-5e827f6c24df] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-412292 -n functional-412292
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-13 21:37:01.036091888 +0000 UTC m=+1123.298170598
functional_test.go:1645: (dbg) Run:  kubectl --context functional-412292 describe po hello-node-connect-7d85dfc575-jndv7 -n default
functional_test.go:1645: (dbg) kubectl --context functional-412292 describe po hello-node-connect-7d85dfc575-jndv7 -n default:
Name:             hello-node-connect-7d85dfc575-jndv7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-412292/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:27:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8nkn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-x8nkn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jndv7 to functional-412292
  Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m47s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
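
The events above pin the failure on CRI-O's short-name image resolution: with short-name-mode = "enforcing" in the node's containers-registries configuration, the unqualified reference kicbase/echo-server resolves ambiguously against the unqualified-search registries, so the pull is rejected outright and the pod never leaves ImagePullBackOff. Two workaround sketches follow; the drop-in file name is hypothetical, stdin forwarding through minikube ssh is assumed to behave like plain ssh, and short-name-mode = "permissive" trades resolution safety for convenience:

	# Sketch 1 (assumes the CRI-O systemd unit inside the node is named "crio"):
	# relax short-name enforcement via a registries.conf drop-in, then restart
	# CRI-O so the backed-off pull can succeed on its next retry.
	minikube -p functional-412292 ssh -- sudo tee /etc/containers/registries.conf.d/99-short-name.conf <<'EOF'
	short-name-mode = "permissive"
	EOF
	minikube -p functional-412292 ssh -- sudo systemctl restart crio

	# Sketch 2: sidestep short-name resolution by fully qualifying the image;
	# this mirrors the create command at functional_test.go:1636 above.
	kubectl --context functional-412292 create deployment hello-node-connect \
	    --image docker.io/kicbase/echo-server:latest
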
functional_test.go:1645: (dbg) Run:  kubectl --context functional-412292 logs hello-node-connect-7d85dfc575-jndv7 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-412292 logs hello-node-connect-7d85dfc575-jndv7 -n default: exit status 1 (68.768223ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jndv7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-412292 logs hello-node-connect-7d85dfc575-jndv7 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-412292 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-jndv7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-412292/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:27:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8nkn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-x8nkn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jndv7 to functional-412292
  Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m47s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-412292 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-412292 logs -l app=hello-node-connect: exit status 1 (70.897873ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-jndv7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-412292 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-412292 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.126.177
IPs:                      10.105.126.177
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31404/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
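
The empty Endpoints field above is the downstream symptom of the same pull failure: the pod never became Ready, so the NodePort service selects no backends and connections to port 31404 are refused rather than answered. A quick cross-check, assuming nothing beyond the kubectl context used throughout this test:

	# Confirm the service has no ready backends, and surface the waiting reason
	# that keeps the only matching pod out of the endpoint set.
	kubectl --context functional-412292 get endpoints hello-node-connect
	kubectl --context functional-412292 get pods -l app=hello-node-connect \
	    -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'
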
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-412292
helpers_test.go:243: (dbg) docker inspect functional-412292:

-- stdout --
	[
	    {
	        "Id": "9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151",
	        "Created": "2025-10-13T21:24:55.154811481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:24:55.189048856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151/hosts",
	        "LogPath": "/var/lib/docker/containers/9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151/9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151-json.log",
	        "Name": "/functional-412292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-412292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-412292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c2d73f7c225dd4d04d6f6b7096e5576996e7376dcf9fc374757ace0a40eb151",
	                "LowerDir": "/var/lib/docker/overlay2/495caf2e22fef325f58028950b553dee729656778c80304ebd8bd6b21cc48994-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/495caf2e22fef325f58028950b553dee729656778c80304ebd8bd6b21cc48994/merged",
	                "UpperDir": "/var/lib/docker/overlay2/495caf2e22fef325f58028950b553dee729656778c80304ebd8bd6b21cc48994/diff",
	                "WorkDir": "/var/lib/docker/overlay2/495caf2e22fef325f58028950b553dee729656778c80304ebd8bd6b21cc48994/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-412292",
	                "Source": "/var/lib/docker/volumes/functional-412292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-412292",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-412292",
	                "name.minikube.sigs.k8s.io": "functional-412292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38a1aa9d22cb1b867d1976fb0a5516a092f4860db87af1f45be093b43bb0947c",
	            "SandboxKey": "/var/run/docker/netns/38a1aa9d22cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-412292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:38:c7:35:e6:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c849e5d9857a2f632d2759ae464955e5c218f980b04b896620e44e957e80f7b",
	                    "EndpointID": "326b0ea7eca1d77200bff9acb3fa24a2a3c2af97152fe6cc844aaacb03f4a919",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-412292",
	                        "9c2d73f7c225"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-412292 -n functional-412292
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 logs -n 25: (1.337896552s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-412292 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │                     │
	│ start          │ -p functional-412292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │                     │
	│ ssh            │ functional-412292 ssh sudo cat /etc/ssl/certs/230929.pem                                                  │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ dashboard      │ --url --port 36195 -p functional-412292 --alsologtostderr -v=1                                            │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /usr/share/ca-certificates/230929.pem                                      │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /etc/ssl/certs/51391683.0                                                  │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /etc/ssl/certs/2309292.pem                                                 │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /usr/share/ca-certificates/2309292.pem                                     │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                  │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh sudo cat /etc/test/nested/copy/230929/hosts                                         │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ image          │ functional-412292 image ls --format short --alsologtostderr                                               │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ image          │ functional-412292 image ls --format yaml --alsologtostderr                                                │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ ssh            │ functional-412292 ssh pgrep buildkitd                                                                     │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │                     │
	│ image          │ functional-412292 image build -t localhost/my-image:functional-412292 testdata/build --alsologtostderr    │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ image          │ functional-412292 image ls                                                                                │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ image          │ functional-412292 image ls --format json --alsologtostderr                                                │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ image          │ functional-412292 image ls --format table --alsologtostderr                                               │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ update-context │ functional-412292 update-context --alsologtostderr -v=2                                                   │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ update-context │ functional-412292 update-context --alsologtostderr -v=2                                                   │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ update-context │ functional-412292 update-context --alsologtostderr -v=2                                                   │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ service        │ functional-412292 service list                                                                            │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:36 UTC │ 13 Oct 25 21:36 UTC │
	│ service        │ functional-412292 service list -o json                                                                    │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:36 UTC │ 13 Oct 25 21:36 UTC │
	│ service        │ functional-412292 service --namespace=default --https --url hello-node                                    │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:36 UTC │                     │
	│ service        │ functional-412292 service hello-node --url --format={{.IP}}                                               │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:36 UTC │                     │
	│ service        │ functional-412292 service hello-node --url                                                                │ functional-412292 │ jenkins │ v1.37.0 │ 13 Oct 25 21:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:27:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:27:19.398134  269250 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:27:19.398277  269250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.398291  269250 out.go:374] Setting ErrFile to fd 2...
	I1013 21:27:19.398298  269250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.398774  269250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:27:19.399465  269250 out.go:368] Setting JSON to false
	I1013 21:27:19.400801  269250 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4187,"bootTime":1760386652,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:27:19.400957  269250 start.go:141] virtualization: kvm guest
	I1013 21:27:19.403044  269250 out.go:179] * [functional-412292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:27:19.404349  269250 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:27:19.404349  269250 notify.go:220] Checking for updates...
	I1013 21:27:19.407173  269250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:27:19.408617  269250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:27:19.412531  269250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:27:19.413801  269250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:27:19.415143  269250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:27:19.416926  269250 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:27:19.417447  269250 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:27:19.444278  269250 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:27:19.444382  269250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:27:19.510061  269250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-13 21:27:19.497446178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:27:19.510218  269250 docker.go:318] overlay module found
	I1013 21:27:19.512414  269250 out.go:179] * Using the docker driver based on existing profile
	I1013 21:27:19.513778  269250 start.go:305] selected driver: docker
	I1013 21:27:19.513797  269250 start.go:925] validating driver "docker" against &{Name:functional-412292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-412292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:27:19.513921  269250 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:27:19.515965  269250 out.go:203] 
	W1013 21:27:19.517631  269250 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 21:27:19.518903  269250 out.go:203] 
	
	
	==> CRI-O <==
	Oct 13 21:27:26 functional-412292 crio[3567]: time="2025-10-13T21:27:26.279794009Z" level=info msg="Starting container: 8294c5efd4005524667e5178a1262c1cca9accda9e3f27646b3697ddc1484d95" id=50eba1c9-ff49-4a61-b778-171fa1310cec name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:27:26 functional-412292 crio[3567]: time="2025-10-13T21:27:26.281585742Z" level=info msg="Started container" PID=7067 containerID=8294c5efd4005524667e5178a1262c1cca9accda9e3f27646b3697ddc1484d95 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lmj2x/kubernetes-dashboard id=50eba1c9-ff49-4a61-b778-171fa1310cec name=/runtime.v1.RuntimeService/StartContainer sandboxID=0707b88369c000e0d66601fd98e99bc8a060bd697522f47a81d02fb88b048c25
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.90324076Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=6656bf9c-e30c-4d61-9bc3-b9c446972270 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.904022558Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4c5c923d-8c9d-48fd-b7a8-458e7a27e776 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.906275993Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=660ed2fe-8bc3-4f80-8923-20f75913dce7 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.91189435Z" level=info msg="Creating container: default/mysql-5bb876957f-hhck9/mysql" id=1b9d86b3-b194-4cec-a5c4-18d82a693cd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.913420311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.919848559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.920438897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.952274854Z" level=info msg="Created container 79ab3067716f4c2e407c8dbd7c854ef0d6245756f2f2c9d755c7238da259e6d8: default/mysql-5bb876957f-hhck9/mysql" id=1b9d86b3-b194-4cec-a5c4-18d82a693cd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.953128364Z" level=info msg="Starting container: 79ab3067716f4c2e407c8dbd7c854ef0d6245756f2f2c9d755c7238da259e6d8" id=43cb4788-03c2-40a1-a5f0-b50bc6f7dfce name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:27:32 functional-412292 crio[3567]: time="2025-10-13T21:27:32.955632362Z" level=info msg="Started container" PID=7444 containerID=79ab3067716f4c2e407c8dbd7c854ef0d6245756f2f2c9d755c7238da259e6d8 description=default/mysql-5bb876957f-hhck9/mysql id=43cb4788-03c2-40a1-a5f0-b50bc6f7dfce name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9c5a9108139e92c51fe545b715e4c9c7874a82780dea51ec2f9c2aea95bb947
	Oct 13 21:27:38 functional-412292 crio[3567]: time="2025-10-13T21:27:38.906700269Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2875c4f7-fba3-440c-b930-b3fb6db50ab0 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:27:42 functional-412292 crio[3567]: time="2025-10-13T21:27:42.906677319Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=11f5e1c9-772d-45f8-aef8-6c40249ad146 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:28:01 functional-412292 crio[3567]: time="2025-10-13T21:28:01.906418691Z" level=info msg="Stopping pod sandbox: aa033ff1c806428a500ed263d9b018dd34dfd4a51340e05e027909c94b4043b1" id=ba02c79e-0b8a-40f9-b940-ae1bd48fd7e9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:28:01 functional-412292 crio[3567]: time="2025-10-13T21:28:01.906478754Z" level=info msg="Stopped pod sandbox (already stopped): aa033ff1c806428a500ed263d9b018dd34dfd4a51340e05e027909c94b4043b1" id=ba02c79e-0b8a-40f9-b940-ae1bd48fd7e9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:28:01 functional-412292 crio[3567]: time="2025-10-13T21:28:01.906781023Z" level=info msg="Removing pod sandbox: aa033ff1c806428a500ed263d9b018dd34dfd4a51340e05e027909c94b4043b1" id=739578bc-f79f-4634-ae4f-20b692fd33cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:28:01 functional-412292 crio[3567]: time="2025-10-13T21:28:01.910705232Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 21:28:01 functional-412292 crio[3567]: time="2025-10-13T21:28:01.910788911Z" level=info msg="Removed pod sandbox: aa033ff1c806428a500ed263d9b018dd34dfd4a51340e05e027909c94b4043b1" id=739578bc-f79f-4634-ae4f-20b692fd33cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:28:23 functional-412292 crio[3567]: time="2025-10-13T21:28:23.907709869Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=04309036-2194-4715-a937-6db99713377a name=/runtime.v1.ImageService/PullImage
	Oct 13 21:28:24 functional-412292 crio[3567]: time="2025-10-13T21:28:24.907187919Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4830e4ff-415d-4eae-8372-f25e8e71c4fe name=/runtime.v1.ImageService/PullImage
	Oct 13 21:29:46 functional-412292 crio[3567]: time="2025-10-13T21:29:46.907718373Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d8f34262-1029-4587-a726-239ccdf3b861 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:29:56 functional-412292 crio[3567]: time="2025-10-13T21:29:56.907598494Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=03d457bc-b2a1-4934-ba2a-69e1c8dfc974 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:32:30 functional-412292 crio[3567]: time="2025-10-13T21:32:30.907556263Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5bdd38de-5277-4c8d-9ee3-ef0fed88b4f6 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:32:41 functional-412292 crio[3567]: time="2025-10-13T21:32:41.908063243Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2585d432-eed3-4d26-ae1c-93929239c5e9 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	79ab3067716f4       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   c9c5a9108139e       mysql-5bb876957f-hhck9                       default
	8294c5efd4005       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   0707b88369c00       kubernetes-dashboard-855c9754f9-lmj2x        kubernetes-dashboard
	f0febb24aa372       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   e673ebe66199b       dashboard-metrics-scraper-77bf4d6c4c-47k2q   kubernetes-dashboard
	f34caa519ecc7       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   4ce7ba8d7d7e8       sp-pod                                       default
	b18a77c7904cf       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   639b90dddddce       busybox-mount                                default
	d64369c8e4d25       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   15abcd3f7c7bf       nginx-svc                                    default
	05893a077ac34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   7a2fdf71278f1       storage-provisioner                          kube-system
	dbf0e60db475a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   f12810e344185       kube-apiserver-functional-412292             kube-system
	f09857f827829       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   a906376310cda       kube-controller-manager-functional-412292    kube-system
	f5a510b4fa9b1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   36a4541d9b48b       etcd-functional-412292                       kube-system
	4564677147d62       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   0e79cad0bf448       kube-scheduler-functional-412292             kube-system
	ba19da822e0ad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   3bf8df2f23dcb       kindnet-r6tj7                                kube-system
	8653a86316406       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   7a2fdf71278f1       storage-provisioner                          kube-system
	e7eda2c77d465       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   822c3d6226653       coredns-66bc5c9577-q5p27                     kube-system
	02aa96e4c1c63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   537acb3b1b820       kube-proxy-kjct2                             kube-system
	b209f9a5ec7db       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   822c3d6226653       coredns-66bc5c9577-q5p27                     kube-system
	245ecfa1a7c02       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   3bf8df2f23dcb       kindnet-r6tj7                                kube-system
	a6edc4aa9ed13       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   537acb3b1b820       kube-proxy-kjct2                             kube-system
	15ca8a672504e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   a906376310cda       kube-controller-manager-functional-412292    kube-system
	6a5493ed38eab       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   0e79cad0bf448       kube-scheduler-functional-412292             kube-system
	65fc805a435a3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   36a4541d9b48b       etcd-functional-412292                       kube-system
	
	
	==> coredns [b209f9a5ec7db1b7cf837847a0d2d2d603b1bd8349302751b932dd3411ad1e36] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48541 - 19972 "HINFO IN 4476244809023432809.7567682534440183981. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055741487s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e7eda2c77d465faaef066d2c4bdc7c5f8c80919d9936a8388dc6141188bfa92f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55006 - 43583 "HINFO IN 7032577427886146143.8542035232477199880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06364947s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-412292
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-412292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-412292
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_25_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-412292
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:36:45 +0000   Mon, 13 Oct 2025 21:25:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:36:45 +0000   Mon, 13 Oct 2025 21:25:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:36:45 +0000   Mon, 13 Oct 2025 21:25:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:36:45 +0000   Mon, 13 Oct 2025 21:25:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-412292
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                93c46fcb-ee2e-45f7-b9fb-688db01a53a5
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-r7zd2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-jndv7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-hhck9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m41s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-66bc5c9577-q5p27                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-412292                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-r6tj7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-412292              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-412292     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-kjct2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-412292              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-47k2q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lmj2x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-412292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-412292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-412292 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-412292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-412292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-412292 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-412292 event: Registered Node functional-412292 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-412292 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x9 over 11m)  kubelet          Node functional-412292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-412292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-412292 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-412292 event: Registered Node functional-412292 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
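
Note on the dmesg output above: the repeating "martian source 10.244.0.20 from 127.0.0.1" entries are most plausibly a side effect of kube-proxy setting route_localnet=1 (visible in the kube-proxy logs further down), which lets loopback-sourced traffic reach NodePorts; the kernel then flags those 127.0.0.1-sourced packets arriving on eth0 as martians. If the log noise is unwanted, a minimal sketch of a sysctl drop-in that silences it without changing routing behavior (the file name is illustrative, not taken from this host):

	# /etc/sysctl.d/90-quiet-martians.conf  -- hypothetical drop-in
	# Stop logging martian packets; routing itself is unaffected.
	net.ipv4.conf.all.log_martians = 0
	net.ipv4.conf.default.log_martians = 0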
	
	
	==> etcd [65fc805a435a312760f8c0a816493e2c44a45ebe37c6c8da2a914fdae03bff6f] <==
	{"level":"warn","ts":"2025-10-13T21:25:04.568983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.576717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.584062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.597852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.604228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.610502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:25:04.667040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46016","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:26:00.058419Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:26:00.058553Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-412292","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-13T21:26:00.058662Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:26:00.060218Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:26:00.060316Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:26:00.060352Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-13T21:26:00.060858Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-13T21:26:00.060873Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:26:00.060883Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:26:00.060931Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:26:00.060948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T21:26:00.060807Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:26:00.061031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:26:00.061044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:26:00.064022Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-13T21:26:00.064131Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:26:00.064177Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-13T21:26:00.064193Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-412292","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f5a510b4fa9b14baf2fb70f3c3bf3aadd96f9ce1ee4386ba831722493433c3a1] <==
	{"level":"warn","ts":"2025-10-13T21:26:22.531863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.541106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.547194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.554279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.561468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.567861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.581292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.593175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.601593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.607798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.614122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.622725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.630212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.636737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.643228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.649792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.657247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.665279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.675527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.682931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.689261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:26:22.744279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:36:22.236448Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2025-10-13T21:36:22.257260Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1145,"took":"20.399092ms","hash":94419866,"current-db-size-bytes":3428352,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-13T21:36:22.257328Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":94419866,"revision":1145,"compact-revision":-1}
	
	
	==> kernel <==
	 21:37:02 up  1:19,  0 user,  load average: 0.42, 1.47, 21.34
	Linux functional-412292 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [245ecfa1a7c02eacf164bc0360ad8683b6c2a96ac6b4388a58f759012eff1ddb] <==
	I1013 21:25:13.537598       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:25:13.537974       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1013 21:25:13.538171       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:25:13.538195       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:25:13.538223       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:25:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:25:13.833146       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:25:13.833207       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:25:13.833222       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:25:13.833372       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:25:14.133775       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:25:14.133809       1 metrics.go:72] Registering metrics
	I1013 21:25:14.133884       1 controller.go:711] "Syncing nftables rules"
	I1013 21:25:23.833669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:25:23.833745       1 main.go:301] handling current node
	I1013 21:25:33.841803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:25:33.841857       1 main.go:301] handling current node
	I1013 21:25:43.837126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:25:43.837157       1 main.go:301] handling current node
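
Note on the kindnet output above: the "nri plugin exited: failed to connect to NRI service" line is kindnet probing for an NRI socket that this CRI-O runtime does not expose; kindnet carries on without it, as the subsequent node-sync lines show. If NRI were wanted, CRI-O takes it from its own config drop-ins; a sketch, assuming the crio.conf(5) key names match this CRI-O version (worth verifying; the file name is hypothetical):

	# /etc/crio/crio.conf.d/10-nri.conf  -- hypothetical drop-in, keys per crio.conf(5)
	[crio.nri]
	enable_nri = true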
	
	
	==> kindnet [ba19da822e0ad07a6046807b56e38fdd270a4ae345882c34cc097500222e1295] <==
	I1013 21:35:00.256874       1 main.go:301] handling current node
	I1013 21:35:10.247713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:35:10.247746       1 main.go:301] handling current node
	I1013 21:35:20.249078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:35:20.249125       1 main.go:301] handling current node
	I1013 21:35:30.249423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:35:30.249498       1 main.go:301] handling current node
	I1013 21:35:40.248177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:35:40.248212       1 main.go:301] handling current node
	I1013 21:35:50.254131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:35:50.254171       1 main.go:301] handling current node
	I1013 21:36:00.249242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:00.249294       1 main.go:301] handling current node
	I1013 21:36:10.247850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:10.247900       1 main.go:301] handling current node
	I1013 21:36:20.255198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:20.255234       1 main.go:301] handling current node
	I1013 21:36:30.251580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:30.251618       1 main.go:301] handling current node
	I1013 21:36:40.251309       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:40.251353       1 main.go:301] handling current node
	I1013 21:36:50.251391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:36:50.251426       1 main.go:301] handling current node
	I1013 21:37:00.249129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:37:00.249164       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dbf0e60db475a7508679cbaf53c31cfde812e7db5545892b7d855261012a690f] <==
	I1013 21:26:24.127740       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1013 21:26:24.333852       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1013 21:26:24.334987       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:26:24.340577       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:26:24.773201       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:26:24.859656       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:26:24.871359       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:26:24.932556       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:26:24.938080       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:26:26.860779       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:26:50.505855       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.14.37"}
	I1013 21:26:54.278902       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.78.196"}
	I1013 21:26:57.067300       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.109.192"}
	I1013 21:27:00.707288       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.126.177"}
	E1013 21:27:11.658398       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60098: use of closed network connection
	E1013 21:27:19.383669       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58632: use of closed network connection
	I1013 21:27:20.418897       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:27:20.727514       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.232.1"}
	I1013 21:27:20.739705       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.80.24"}
	I1013 21:27:21.562715       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.199.235"}
	E1013 21:27:38.730810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37800: use of closed network connection
	E1013 21:27:39.471253       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37810: use of closed network connection
	E1013 21:27:41.236224       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37826: use of closed network connection
	E1013 21:27:42.697951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37838: use of closed network connection
	I1013 21:36:23.148687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [15ca8a672504e94d6f7e39a5ec5a11f9645c55acc117f90dad4e905c1de4fdc9] <==
	I1013 21:25:12.050571       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:25:12.050584       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 21:25:12.050640       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 21:25:12.050659       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:25:12.050666       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:25:12.050642       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:25:12.050724       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:25:12.050745       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:25:12.050780       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 21:25:12.051044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:25:12.052229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:25:12.052329       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 21:25:12.052446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:25:12.053402       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 21:25:12.053968       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 21:25:12.054544       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:25:12.054566       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 21:25:12.054626       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 21:25:12.054675       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 21:25:12.054683       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 21:25:12.054688       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 21:25:12.055742       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:25:12.061300       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-412292" podCIDRs=["10.244.0.0/24"]
	I1013 21:25:12.067370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:25:27.001225       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f09857f827829f11a47ab23532c1b136546ee43271da90114eeb6c45fa1da68b] <==
	I1013 21:26:26.556271       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 21:26:26.556408       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:26:26.558380       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:26:26.561703       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:26:26.561746       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:26:26.568938       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 21:26:26.569014       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 21:26:26.569054       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 21:26:26.569059       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 21:26:26.569063       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 21:26:26.571268       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 21:26:26.572363       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:26:26.572481       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:26:26.572548       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-412292"
	I1013 21:26:26.572594       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:26:26.576863       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:26:26.581522       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:26:26.581553       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:26:26.581571       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1013 21:27:20.649110       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 21:27:20.654278       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 21:27:20.659608       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 21:27:20.659761       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 21:27:20.666124       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 21:27:20.670561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [02aa96e4c1c63447c23d9c99eeff003c95140d66d6cfcbeca2754443ad60a144] <==
	E1013 21:25:49.879722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-412292&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:25:51.231079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-412292&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:25:53.907198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-412292&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:25:59.455791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-412292&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:26:17.874034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-412292&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1013 21:26:42.479860       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:26:42.479890       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:26:42.479952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:26:42.498780       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:26:42.498845       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:26:42.504015       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:26:42.504388       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:26:42.504420       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:26:42.505755       1 config.go:200] "Starting service config controller"
	I1013 21:26:42.505780       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:26:42.506326       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:26:42.506730       1 config.go:309] "Starting node config controller"
	I1013 21:26:42.506773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:26:42.506783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:26:42.506333       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:26:42.507091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:26:42.507050       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:26:42.606415       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:26:42.607628       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 21:26:42.607648       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a6edc4aa9ed13a0c4fafe42d8fb957eea96adee1c4d2e6f98caf31d3ffe7a471] <==
	I1013 21:25:13.380380       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:25:13.446900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:25:13.547052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:25:13.547102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:25:13.547206       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:25:13.571137       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:25:13.571199       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:25:13.579793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:25:13.580275       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:25:13.580362       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:25:13.582155       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:25:13.582201       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:25:13.582314       1 config.go:309] "Starting node config controller"
	I1013 21:25:13.582346       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:25:13.582578       1 config.go:200] "Starting service config controller"
	I1013 21:25:13.582592       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:25:13.582623       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:25:13.582632       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:25:13.682589       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:25:13.682676       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:25:13.682690       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:25:13.682681       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
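
Note on the kube-proxy output above: both instances warn that nodePortAddresses is unset and suggest `--nodeport-addresses primary`. The equivalent KubeProxyConfiguration fragment looks like the sketch below (in minikube this lives in the kube-proxy ConfigMap; the fragment is an illustration, not read from this cluster):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Accept NodePort traffic only on the node's primary addresses instead of
	# on all local IPs, which is what the warning is about.
	nodePortAddresses: ["primary"]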
	
	
	==> kube-scheduler [4564677147d629a114ecee755af29309abe1c09c30b1f744e29fbc600cd9a520] <==
	I1013 21:26:22.057668       1 serving.go:386] Generated self-signed cert in-memory
	I1013 21:26:23.168335       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:26:23.168359       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:26:23.172244       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 21:26:23.172259       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:26:23.172271       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 21:26:23.172277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:26:23.172286       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:26:23.172306       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:26:23.172660       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:26:23.172885       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:26:23.272414       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 21:26:23.272454       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:26:23.272429       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6a5493ed38eab8568bb69ff382151b8163d4d39739c655b244cd0739fd7d045d] <==
	E1013 21:25:05.075345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:25:05.075438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:25:05.075539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:25:05.075552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:25:05.075723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:25:05.075790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:25:05.895496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:25:05.937054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:25:06.034764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:25:06.034785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:25:06.049071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:25:06.069461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:25:06.162063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:25:06.180563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:25:06.211616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:25:06.264779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:25:06.325374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:25:06.326201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1013 21:25:08.770491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:25:59.949118       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:25:59.949133       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:25:59.949269       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:25:59.949294       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:25:59.949331       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:25:59.949360       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 13 21:34:22 functional-412292 kubelet[4106]: E1013 21:34:22.906333    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:34:32 functional-412292 kubelet[4106]: E1013 21:34:32.907122    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:34:34 functional-412292 kubelet[4106]: E1013 21:34:34.906704    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:34:47 functional-412292 kubelet[4106]: E1013 21:34:47.907095    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:34:47 functional-412292 kubelet[4106]: E1013 21:34:47.907279    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:34:59 functional-412292 kubelet[4106]: E1013 21:34:59.906325    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:35:00 functional-412292 kubelet[4106]: E1013 21:35:00.907384    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:35:11 functional-412292 kubelet[4106]: E1013 21:35:11.907265    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:35:14 functional-412292 kubelet[4106]: E1013 21:35:14.906573    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:35:23 functional-412292 kubelet[4106]: E1013 21:35:23.907437    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:35:27 functional-412292 kubelet[4106]: E1013 21:35:27.907042    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:35:36 functional-412292 kubelet[4106]: E1013 21:35:36.907211    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:35:39 functional-412292 kubelet[4106]: E1013 21:35:39.907461    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:35:50 functional-412292 kubelet[4106]: E1013 21:35:50.907265    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:35:51 functional-412292 kubelet[4106]: E1013 21:35:51.906875    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:36:04 functional-412292 kubelet[4106]: E1013 21:36:04.906253    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:36:04 functional-412292 kubelet[4106]: E1013 21:36:04.906273    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:36:15 functional-412292 kubelet[4106]: E1013 21:36:15.907347    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:36:16 functional-412292 kubelet[4106]: E1013 21:36:16.907271    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:36:28 functional-412292 kubelet[4106]: E1013 21:36:28.907144    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:36:29 functional-412292 kubelet[4106]: E1013 21:36:29.906964    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:36:43 functional-412292 kubelet[4106]: E1013 21:36:43.906894    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:36:44 functional-412292 kubelet[4106]: E1013 21:36:44.907281    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	Oct 13 21:36:54 functional-412292 kubelet[4106]: E1013 21:36:54.907276    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-jndv7" podUID="4ec0866c-b1c3-4c49-9b43-5e827f6c24df"
	Oct 13 21:36:57 functional-412292 kubelet[4106]: E1013 21:36:57.906924    4106 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-r7zd2" podUID="56d3620e-279b-4e4f-977c-0b9f5cf33c11"
	
	
	==> kubernetes-dashboard [8294c5efd4005524667e5178a1262c1cca9accda9e3f27646b3697ddc1484d95] <==
	2025/10/13 21:27:26 Starting overwatch
	2025/10/13 21:27:26 Using namespace: kubernetes-dashboard
	2025/10/13 21:27:26 Using in-cluster config to connect to apiserver
	2025/10/13 21:27:26 Using secret token for csrf signing
	2025/10/13 21:27:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 21:27:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 21:27:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 21:27:26 Generating JWE encryption key
	2025/10/13 21:27:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 21:27:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 21:27:26 Initializing JWE encryption key from synchronized object
	2025/10/13 21:27:26 Creating in-cluster Sidecar client
	2025/10/13 21:27:26 Successful request to sidecar
	2025/10/13 21:27:26 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [05893a077ac3400808cee8f7fc6aa21077ab407f706c0ccd2d64e2f30aecfcea] <==
	W1013 21:36:37.027815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:39.031181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:39.035176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:41.038231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:41.042361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:43.045614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:43.051050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:45.054209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:45.058116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:47.061454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:47.066794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:49.070374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:49.075042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:51.078895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:51.083252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:53.086487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:53.091647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:55.094967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:55.100340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:57.103457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:57.107697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:59.110841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:36:59.115019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:37:01.118844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:37:01.122796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8653a8631640612ad2e2df1a18955e02fd2848e13838b150c737906a81a92e49] <==
	I1013 21:25:49.781020       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 21:25:49.782887       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-412292 -n functional-412292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-412292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-r7zd2 hello-node-connect-7d85dfc575-jndv7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-412292 describe pod busybox-mount hello-node-75c85bcc94-r7zd2 hello-node-connect-7d85dfc575-jndv7
helpers_test.go:290: (dbg) kubectl --context functional-412292 describe pod busybox-mount hello-node-75c85bcc94-r7zd2 hello-node-connect-7d85dfc575-jndv7:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-412292/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 21:27:11 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b18a77c7904cf4d418b969a10d614a35815e967a32a9c19b1cded6434b974514
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 21:27:12 +0000
	      Finished:     Mon, 13 Oct 2025 21:27:12 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dgvk9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-dgvk9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m52s  default-scheduler  Successfully assigned default/busybox-mount to functional-412292
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 732ms (732ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-r7zd2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-412292/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 21:26:54 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7dtw8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7dtw8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r7zd2 to functional-412292
	  Normal   Pulling    7m17s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    6s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     6s (x42 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-jndv7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-412292/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 21:27:00 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8nkn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x8nkn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-jndv7 to functional-412292
	  Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m49s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.99s)
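
Every pull in this test dies on the same CRI-O policy: with short-name mode set to enforcing, an unqualified reference such as kicbase/echo-server is rejected whenever more than one unqualified-search registry could resolve it, which is exactly the "returns ambiguous list" error repeated in the kubelet log above. A minimal workaround sketch, assuming docker.io is the intended registry; the drop-in filename and alias below are illustrative, not taken from this run:

	# Open a shell on the node, then pin the short name to one registry
	# with a containers-registries.conf(5) alias drop-in:
	minikube -p functional-412292 ssh
	sudo tee /etc/containers/registries.conf.d/99-echo-server.conf >/dev/null <<'EOF'
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	# Restart CRI-O so the alias is picked up; kubelet's back-off retries the pull on its own.
	sudo systemctl restart crio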

TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-412292 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-412292 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-r7zd2" [56d3620e-279b-4e4f-977c-0b9f5cf33c11] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-412292 -n functional-412292
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-13 21:36:54.602861288 +0000 UTC m=+1116.864939994
functional_test.go:1460: (dbg) Run:  kubectl --context functional-412292 describe po hello-node-75c85bcc94-r7zd2 -n default
functional_test.go:1460: (dbg) kubectl --context functional-412292 describe po hello-node-75c85bcc94-r7zd2 -n default:
Name:             hello-node-75c85bcc94-r7zd2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-412292/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:26:54 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7dtw8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7dtw8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-r7zd2 to functional-412292
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-412292 logs hello-node-75c85bcc94-r7zd2 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-412292 logs hello-node-75c85bcc94-r7zd2 -n default: exit status 1 (71.959894ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-r7zd2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-412292 logs hello-node-75c85bcc94-r7zd2 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
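
The deployment was created with the bare image name (functional_test.go:1451), so every kubelet retry hits the same ambiguous-list rejection until the 10m0s wait expires. A variant of the test's own commands that sidesteps short-name resolution by fully qualifying the reference; only the image argument differs from what the test ran:

	kubectl --context functional-412292 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-412292 expose deployment hello-node --type=NodePort --port=8080
	# Optional: block until the rollout actually succeeds instead of polling pods.
	kubectl --context functional-412292 rollout status deployment/hello-node --timeout=120s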

TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 image ls --format short --alsologtostderr: (2.27174285s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-412292 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-412292 image ls --format short --alsologtostderr:
I1013 21:27:29.247797  270722 out.go:360] Setting OutFile to fd 1 ...
I1013 21:27:29.247937  270722 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:29.247954  270722 out.go:374] Setting ErrFile to fd 2...
I1013 21:27:29.247966  270722 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:29.248166  270722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
I1013 21:27:29.248800  270722 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:29.248922  270722 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:29.249318  270722 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
I1013 21:27:29.272425  270722 ssh_runner.go:195] Run: systemctl --version
I1013 21:27:29.272575  270722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
I1013 21:27:29.296668  270722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
I1013 21:27:29.408481  270722 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:27:31.446264  270722 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.037690896s)
W1013 21:27:31.446370  270722 cache_images.go:735] Failed to list images for profile functional-412292 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1013 21:27:31.443063    7248 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-10-13T21:27:31Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
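
Unlike the pull failures above, this is a CRI round-trip problem: the ListImages RPC was cancelled with DeadlineExceeded after roughly two seconds, so `image ls` printed an empty list and the registry.k8s.io/pause check failed against nothing rather than against a missing image. A diagnostic sketch, assuming SSH access to the node; the 30s timeout is arbitrary, chosen only to tell a slow image service apart from a hung one:

	# Re-issue the RPC the test ran, with a generous client-side timeout.
	minikube -p functional-412292 ssh -- sudo crictl --timeout 30s images --output json
	# If it still stalls, CRI-O's journal usually names what the image service is blocked on.
	minikube -p functional-412292 ssh -- "sudo journalctl -u crio --no-pager --since '10 min ago'"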

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image load --daemon kicbase/echo-server:functional-412292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-412292" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image load --daemon kicbase/echo-server:functional-412292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-412292" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-412292
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image load --daemon kicbase/echo-server:functional-412292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-412292" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)
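
ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all fail the same assertion: after `image load --daemon`, the tag never shows up in `image ls`. Given that ImageListShort above shows the listing RPC itself timing out, these may be listing failures rather than load failures. A sketch that bypasses `minikube image ls` and asks the runtime directly whether the image landed:

	minikube -p functional-412292 image load --daemon kicbase/echo-server:functional-412292 --alsologtostderr
	# Query CRI-O directly instead of trusting the (currently flaky) image-listing path;
	# the grep runs locally on the streamed output.
	minikube -p functional-412292 ssh -- sudo crictl images | grep echo-server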

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image save kicbase/echo-server:functional-412292 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1013 21:26:59.923575  265335 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:26:59.923886  265335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:26:59.923896  265335 out.go:374] Setting ErrFile to fd 2...
	I1013 21:26:59.923900  265335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:26:59.924157  265335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:26:59.924856  265335 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:26:59.924942  265335 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:26:59.925382  265335 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
	I1013 21:26:59.942334  265335 ssh_runner.go:195] Run: systemctl --version
	I1013 21:26:59.942409  265335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
	I1013 21:26:59.959455  265335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
	I1013 21:27:00.055901  265335 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1013 21:27:00.055958  265335 cache_images.go:254] Failed to load cached images for "functional-412292": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1013 21:27:00.055978  265335 cache_images.go:266] failed pushing to: functional-412292

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
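
This failure is inherited: ImageSaveToFile above exited without writing echo-server-save.tar, so the stat in cache_images.go finds nothing to load. A minimal roundtrip sketch that verifies the artifact between the two halves instead of only at the end (the /tmp path is illustrative):

	minikube -p functional-412292 image save kicbase/echo-server:functional-412292 /tmp/echo-server-save.tar --alsologtostderr
	# Fail fast if save silently produced nothing, as it did in this run.
	test -s /tmp/echo-server-save.tar || { echo "image save wrote no tarball" >&2; exit 1; }
	minikube -p functional-412292 image load /tmp/echo-server-save.tar --alsologtostderr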

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-412292
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image save --daemon kicbase/echo-server:functional-412292 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-412292
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-412292: exit status 1 (19.193082ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-412292

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-412292

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
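
The inspect is aimed at the right name: after `image save --daemon`, the test expects the image back in the host Docker daemon under the localhost/ prefix. Since the tag was deleted just beforehand (functional_test.go:434) and the save evidently exported nothing, the inspect finds no such image. Replaying the same three steps by hand separates a save failure from a naming mismatch:

	docker rmi kicbase/echo-server:functional-412292
	minikube -p functional-412292 image save --daemon kicbase/echo-server:functional-412292 --alsologtostderr
	# Succeeds only if the save actually re-exported the image to the local daemon.
	docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-412292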

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 service --namespace=default --https --url hello-node: exit status 115 (526.785333ms)

-- stdout --
	https://192.168.49.2:31087
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-412292 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 service hello-node --url --format={{.IP}}: exit status 115 (529.862972ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-412292 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 service hello-node --url: exit status 115 (527.614818ms)

-- stdout --
	http://192.168.49.2:31087
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-412292 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31087
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
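
HTTPS, Format and URL fail for the same upstream reason: the URL itself is minted (the NodePort 31087 even reaches stdout), but minikube exits with SVC_UNREACHABLE because no running pod backs the hello-node service; this is the DeployApp image-pull failure propagating. A quick cluster-side confirmation, assuming the default namespace these tests use:

	# An empty ENDPOINTS column means the service has nothing Ready to route to.
	kubectl --context functional-412292 get endpoints hello-node -n default
	kubectl --context functional-412292 get pods -l app=hello-node -n default -o wide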

TestJSONOutput/pause/Command (2.4s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-264006 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-264006 --output=json --user=testUser: exit status 80 (2.404189566s)

-- stdout --
	{"specversion":"1.0","id":"dc43f0b0-69e8-4c33-8fa3-3818ff02adfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-264006 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6deaa879-fdbe-472c-9230-7e009afa11dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T21:46:23Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2280e70a-62cf-4bb1-b38a-914ef9454609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-264006 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.40s)

TestJSONOutput/unpause/Command (1.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-264006 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-264006 --output=json --user=testUser: exit status 80 (1.575331208s)

-- stdout --
	{"specversion":"1.0","id":"4f76d223-d405-4301-a3ed-add5f0efff7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-264006 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"621bdf2b-6a55-4826-9696-b35c3201f0a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T21:46:24Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"f9757027-7fd3-4514-8db7-2993e1d58c57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-264006 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.58s)
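
pause and unpause both die on the same node-level symptom: `sudo runc list -f json` cannot open /run/runc, the state directory runc uses when invoked as root. That points at a runtime mismatch on the node, e.g. containers created by a different OCI runtime or under a different state root, rather than at the JSON-output machinery under test. Two hedged probes, assuming SSH access to the node (the crun path is speculative, listed only because crun keeps its state under /run/crun):

	# Which state roots actually exist on the node?
	minikube -p json-output-264006 ssh -- "sudo ls -ld /run/runc /run/crun 2>/dev/null; true"
	# Which OCI runtime is CRI-O configured to launch?
	minikube -p json-output-264006 ssh -- "sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_path'"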

TestPause/serial/Pause (6.2s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-253311 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-253311 --alsologtostderr -v=5: exit status 80 (2.523887299s)

-- stdout --
	* Pausing node pause-253311 ... 
	
	

-- /stdout --
** stderr ** 
	I1013 22:00:03.743772  433171 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:00:03.744078  433171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:00:03.744089  433171 out.go:374] Setting ErrFile to fd 2...
	I1013 22:00:03.744096  433171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:00:03.744432  433171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:00:03.744762  433171 out.go:368] Setting JSON to false
	I1013 22:00:03.744790  433171 mustload.go:65] Loading cluster: pause-253311
	I1013 22:00:03.745387  433171 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:03.746003  433171 cli_runner.go:164] Run: docker container inspect pause-253311 --format={{.State.Status}}
	I1013 22:00:03.769689  433171 host.go:66] Checking if "pause-253311" exists ...
	I1013 22:00:03.770061  433171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:00:03.842633  433171 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 22:00:03.830371185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:00:03.843555  433171 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-253311 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:00:03.845583  433171 out.go:179] * Pausing node pause-253311 ... 
	I1013 22:00:03.846883  433171 host.go:66] Checking if "pause-253311" exists ...
	I1013 22:00:03.847294  433171 ssh_runner.go:195] Run: systemctl --version
	I1013 22:00:03.847342  433171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 22:00:03.867491  433171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 22:00:03.973321  433171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:03.988744  433171 pause.go:52] kubelet running: true
	I1013 22:00:03.988822  433171 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:00:04.146404  433171 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:00:04.146534  433171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:00:04.232437  433171 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:04.232465  433171 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:04.232471  433171 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:04.232476  433171 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:04.232481  433171 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:04.232486  433171 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:04.232490  433171 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:04.232494  433171 cri.go:89] found id: ""
	I1013 22:00:04.232541  433171 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:00:04.248503  433171 retry.go:31] will retry after 346.036783ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:04Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:00:04.595049  433171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:04.610194  433171 pause.go:52] kubelet running: false
	I1013 22:00:04.610255  433171 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:00:04.765798  433171 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:00:04.765900  433171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:00:04.837601  433171 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:04.837625  433171 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:04.837630  433171 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:04.837634  433171 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:04.837638  433171 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:04.837642  433171 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:04.837646  433171 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:04.837649  433171 cri.go:89] found id: ""
	I1013 22:00:04.837699  433171 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:00:04.849961  433171 retry.go:31] will retry after 538.344598ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:04Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:00:05.388725  433171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:05.402258  433171 pause.go:52] kubelet running: false
	I1013 22:00:05.402311  433171 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:00:05.508987  433171 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:00:05.509123  433171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:00:05.577408  433171 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:05.577434  433171 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:05.577438  433171 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:05.577441  433171 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:05.577444  433171 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:05.577447  433171 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:05.577449  433171 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:05.577451  433171 cri.go:89] found id: ""
	I1013 22:00:05.577490  433171 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:00:05.590072  433171 retry.go:31] will retry after 335.789261ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:05Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:00:05.926419  433171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:05.943699  433171 pause.go:52] kubelet running: false
	I1013 22:00:05.943763  433171 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:00:06.092918  433171 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:00:06.093055  433171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:00:06.179046  433171 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:06.179076  433171 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:06.179082  433171 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:06.179087  433171 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:06.179091  433171 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:06.179095  433171 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:06.179098  433171 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:06.179102  433171 cri.go:89] found id: ""
	I1013 22:00:06.179152  433171 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:00:06.195289  433171 out.go:203] 
	W1013 22:00:06.196470  433171 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:00:06.196490  433171 out.go:285] * 
	W1013 22:00:06.202368  433171 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:00:06.203803  433171 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-253311 --alsologtostderr -v=5" : exit status 80
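The failure mode above is mechanical: before pausing, minikube enumerates running containers with `sudo runc list -f json`, retries on a short backoff whenever that command exits non-zero (the retry.go lines), and surfaces GUEST_PAUSE once the retry budget is exhausted. On this node the command fails because runc's state directory /run/runc does not exist. A minimal standalone Go sketch of that retry shape, useful for reproducing the check by hand on the node (delays, attempt count, and messages are illustrative, not minikube's actual retry helper):

// runc_list_retry.go: a sketch of the retry loop visible in the log above.
// Runs `sudo runc list -f json`, sleeps briefly on failure, gives up after
// a few attempts. Delays and attempt count are illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delays := []time.Duration{350 * time.Millisecond, 550 * time.Millisecond, 350 * time.Millisecond}
	for attempt := 0; ; attempt++ {
		// The same command the log shows; here it exits 1 with
		// "open /run/runc: no such file or directory" because runc's
		// state directory is absent on this CRI-O node.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc list succeeded: %s\n", out)
			return
		}
		if attempt >= len(delays) {
			// minikube reports this terminal case as GUEST_PAUSE.
			fmt.Printf("giving up: %v: %s\n", err, out)
			return
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
}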
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-253311
helpers_test.go:243: (dbg) docker inspect pause-253311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296",
	        "Created": "2025-10-13T21:59:20.119651023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421609,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:59:20.179825386Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/hosts",
	        "LogPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296-json.log",
	        "Name": "/pause-253311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-253311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-253311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296",
	                "LowerDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-253311",
	                "Source": "/var/lib/docker/volumes/pause-253311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-253311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-253311",
	                "name.minikube.sigs.k8s.io": "pause-253311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e606821829bc6a7d95ff9e2de798390f4d2d906d84b003d3b2e49fb53c64606a",
	            "SandboxKey": "/var/run/docker/netns/e606821829bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-253311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:be:17:7b:37:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73e06fdbd50b8913868550c8d723017ebbbe250249a86c952e48794e194a630d",
	                    "EndpointID": "c73eb1aa2ee4ebb65485cd884f29562661c1a6ad9251274ca217dc361adb0bb5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-253311",
	                        "4e4d367348ac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
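Most of the inspect dump above is incidental to this failure; the relevant facts are in State (the container is running and not paused) and the forwarded 22/tcp port used for SSH provisioning. A small Go sketch that pulls just those fields with an inspect format template rather than parsing the full JSON (the container name is the one from this run; the Ports lookup is the same template expression the log later shows minikube using to find the SSH port):

// inspect_state.go: a sketch that asks `docker inspect` for only the fields
// relevant to the post-mortem above, instead of the full JSON document.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Standard docker inspect Go-template fields; they correspond to the
	// "State" and "NetworkSettings.Ports" objects in the dump above.
	format := `status={{.State.Status}} paused={{.State.Paused}} ssh=127.0.0.1:{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "pause-253311").CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("pause-253311: %s", out)
}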
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-253311 -n pause-253311
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-253311 -n pause-253311: exit status 2 (332.320546ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-253311 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p scheduled-stop-408315                                                                                                                 │ scheduled-stop-408315       │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:57 UTC │
	│ start   │ -p insufficient-storage-240381 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-240381 │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │                     │
	│ delete  │ -p insufficient-storage-240381                                                                                                           │ insufficient-storage-240381 │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:57 UTC │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p offline-crio-932435 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-932435         │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p missing-upgrade-878493 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-878493      │ jenkins │ v1.32.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p stopped-upgrade-126916 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-126916      │ jenkins │ v1.32.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ stop    │ -p kubernetes-upgrade-050146                                                                                                             │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ stop    │ stopped-upgrade-126916 stop                                                                                                              │ stopped-upgrade-126916      │ jenkins │ v1.32.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │                     │
	│ start   │ -p missing-upgrade-878493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-878493      │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p stopped-upgrade-126916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-126916      │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p offline-crio-932435                                                                                                                   │ offline-crio-932435         │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p running-upgrade-850760 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-850760      │ jenkins │ v1.32.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p stopped-upgrade-126916                                                                                                                │ stopped-upgrade-126916      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p pause-253311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p running-upgrade-850760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-850760      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p missing-upgrade-878493                                                                                                                │ missing-upgrade-878493      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p NoKubernetes-686990 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p NoKubernetes-686990 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p running-upgrade-850760                                                                                                                │ running-upgrade-850760      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p force-systemd-flag-886102 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-886102   │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p pause-253311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 22:00 UTC │
	│ pause   │ -p pause-253311 --alsologtostderr -v=5                                                                                                   │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:59:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:59:57.779758  431667 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:59:57.779851  431667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:59:57.779856  431667 out.go:374] Setting ErrFile to fd 2...
	I1013 21:59:57.779860  431667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:59:57.780112  431667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:59:57.780643  431667 out.go:368] Setting JSON to false
	I1013 21:59:57.781922  431667 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6146,"bootTime":1760386652,"procs":463,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:59:57.782066  431667 start.go:141] virtualization: kvm guest
	I1013 21:59:57.785126  431667 out.go:179] * [pause-253311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:59:57.786911  431667 notify.go:220] Checking for updates...
	I1013 21:59:57.786945  431667 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:59:57.788219  431667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:59:57.789390  431667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:59:57.790659  431667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:59:57.791957  431667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:59:57.793241  431667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:59:56.942599  428671 cli_runner.go:164] Run: docker network inspect force-systemd-flag-886102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:59:56.960528  428671 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:59:56.965015  428671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:59:56.975948  428671 kubeadm.go:883] updating cluster {Name:force-systemd-flag-886102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:59:56.976131  428671 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:59:56.976209  428671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:59:57.009759  428671 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:59:57.009780  428671 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:59:57.009823  428671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:59:57.036665  428671 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:59:57.036690  428671 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:59:57.036699  428671 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:59:57.036803  428671 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-886102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:59:57.036875  428671 ssh_runner.go:195] Run: crio config
	I1013 21:59:57.086371  428671 cni.go:84] Creating CNI manager for ""
	I1013 21:59:57.086403  428671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:59:57.086428  428671 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:59:57.086459  428671 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-886102 NodeName:force-systemd-flag-886102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:59:57.086628  428671 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-886102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:59:57.086707  428671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:59:57.095364  428671 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:59:57.095451  428671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:59:57.103869  428671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1013 21:59:57.117020  428671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:59:57.133339  428671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1013 21:59:57.146906  428671 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:59:57.150778  428671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:59:57.161065  428671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:59:57.245766  428671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:59:57.272556  428671 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102 for IP: 192.168.85.2
	I1013 21:59:57.272579  428671 certs.go:195] generating shared ca certs ...
	I1013 21:59:57.272595  428671 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.272740  428671 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 21:59:57.272781  428671 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 21:59:57.272791  428671 certs.go:257] generating profile certs ...
	I1013 21:59:57.272846  428671 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key
	I1013 21:59:57.272868  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt with IP's: []
	I1013 21:59:57.354345  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt ...
	I1013 21:59:57.354373  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt: {Name:mk5f46a4eababedf56975f07072e9db31e052433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.354541  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key ...
	I1013 21:59:57.354555  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key: {Name:mkbab824799bbb21f8a90267b7f855810b009e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.354635  428671 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf
	I1013 21:59:57.354650  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 21:59:57.590642  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf ...
	I1013 21:59:57.590671  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf: {Name:mk861c343d1992e080faf57ac9c9a2beac9b2fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.590885  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf ...
	I1013 21:59:57.590905  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf: {Name:mk58a307f517071b0add4abf7bd64070fdfe78f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.591036  428671 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt
	I1013 21:59:57.591124  428671 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key
	I1013 21:59:57.591180  428671 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key
	I1013 21:59:57.591196  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt with IP's: []
	I1013 21:59:57.795211  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:59:57.795887  431667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:59:57.823607  431667 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:59:57.823732  431667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:59:57.894702  431667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 21:59:57.883471152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:59:57.894824  431667 docker.go:318] overlay module found
	I1013 21:59:57.896644  431667 out.go:179] * Using the docker driver based on existing profile
	I1013 21:59:57.898111  431667 start.go:305] selected driver: docker
	I1013 21:59:57.898129  431667 start.go:925] validating driver "docker" against &{Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:57.898282  431667 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:59:57.898397  431667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:59:57.971022  431667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 21:59:57.958915343 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:59:57.971903  431667 cni.go:84] Creating CNI manager for ""
	I1013 21:59:57.971981  431667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:59:57.972062  431667 start.go:349] cluster config:
	{Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:57.974365  431667 out.go:179] * Starting "pause-253311" primary control-plane node in "pause-253311" cluster
	I1013 21:59:57.975751  431667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:59:57.977605  431667 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 21:59:57.978891  431667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:59:57.978941  431667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 21:59:57.978955  431667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:59:57.978970  431667 cache.go:58] Caching tarball of preloaded images
	I1013 21:59:57.979109  431667 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:59:57.979125  431667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:59:57.979271  431667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/config.json ...
	I1013 21:59:58.004394  431667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 21:59:58.004415  431667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 21:59:58.004433  431667 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:59:58.004463  431667 start.go:360] acquireMachinesLock for pause-253311: {Name:mk6b04fa29f2bc336f4d43e7e5f3cdef893fa6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:59:58.004526  431667 start.go:364] duration metric: took 39.034µs to acquireMachinesLock for "pause-253311"
	I1013 21:59:58.004547  431667 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:59:58.004556  431667 fix.go:54] fixHost starting: 
	I1013 21:59:58.004814  431667 cli_runner.go:164] Run: docker container inspect pause-253311 --format={{.State.Status}}
	I1013 21:59:58.023481  431667 fix.go:112] recreateIfNeeded on pause-253311: state=Running err=<nil>
	W1013 21:59:58.023534  431667 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:59:58.028537  431667 out.go:252] * Updating the running docker "pause-253311" container ...
	I1013 21:59:58.028581  431667 machine.go:93] provisionDockerMachine start ...
	I1013 21:59:58.028674  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.048585  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.048822  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.048834  431667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:59:58.189187  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253311
	
	I1013 21:59:58.189218  431667 ubuntu.go:182] provisioning hostname "pause-253311"
	I1013 21:59:58.189287  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.209303  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.209553  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.209569  431667 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-253311 && echo "pause-253311" | sudo tee /etc/hostname
	I1013 21:59:58.361381  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253311
	
	I1013 21:59:58.361497  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.381361  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.381608  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.381628  431667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-253311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-253311/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-253311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:59:58.524946  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
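The provisioning exchange above is plain docker-plus-ssh plumbing: minikube reads the host port mapped to the container's 22/tcp with an inspect template, then runs the hostname commands over SSH as the docker user with the per-machine id_rsa key (the "new ssh client" lines below show the same connection details). A minimal sketch of the same steps by hand; NAME and KEY are placeholders for this run, not minikube source:

    NAME=pause-253311
    KEY=~/.minikube/machines/$NAME/id_rsa   # placeholder; real path is under the integration dir
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME")
    ssh -i "$KEY" -p "$PORT" docker@127.0.0.1 \
      "sudo hostname $NAME && echo $NAME | sudo tee /etc/hostname"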
	I1013 21:59:58.525004  431667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 21:59:58.525030  431667 ubuntu.go:190] setting up certificates
	I1013 21:59:58.525043  431667 provision.go:84] configureAuth start
	I1013 21:59:58.525103  431667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-253311
	I1013 21:59:58.545111  431667 provision.go:143] copyHostCerts
	I1013 21:59:58.545175  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 21:59:58.545192  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 21:59:58.545263  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 21:59:58.545365  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 21:59:58.545375  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 21:59:58.545401  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 21:59:58.545531  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 21:59:58.545542  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 21:59:58.545566  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 21:59:58.545632  431667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.pause-253311 san=[127.0.0.1 192.168.94.2 localhost minikube pause-253311]
	I1013 21:59:58.623400  431667 provision.go:177] copyRemoteCerts
	I1013 21:59:58.623471  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:59:58.623511  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.645371  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:58.749062  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:59:58.767265  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 21:59:58.785173  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:59:58.802874  431667 provision.go:87] duration metric: took 277.817212ms to configureAuth
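configureAuth refreshes the host-side CA material and mints a server certificate whose SANs cover every name the machine answers to (the san=[...] list above). minikube does this in Go; a rough openssl equivalent, assuming the ca.pem/ca-key.pem pair from this run and illustrative file names, would be:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.pause-253311" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:pause-253311')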
	I1013 21:59:58.802901  431667 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:59:58.803176  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:59:58.803298  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.821367  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.821580  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.821597  431667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:59:59.145842  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:59:59.145871  431667 machine.go:96] duration metric: took 1.117280469s to provisionDockerMachine
	I1013 21:59:59.145886  431667 start.go:293] postStartSetup for "pause-253311" (driver="docker")
	I1013 21:59:59.145899  431667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:59:59.145950  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:59:59.146050  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.166667  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.267059  431667 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:59:59.271002  431667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:59:59.271040  431667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:59:59.271061  431667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 21:59:59.271118  431667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 21:59:59.271221  431667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 21:59:59.271347  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:59:59.279310  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 21:59:59.298174  431667 start.go:296] duration metric: took 152.268761ms for postStartSetup
	I1013 21:59:59.298256  431667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:59:59.298317  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.317186  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.414335  431667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:59:59.420181  431667 fix.go:56] duration metric: took 1.415616609s for fixHost
	I1013 21:59:59.420210  431667 start.go:83] releasing machines lock for "pause-253311", held for 1.41567056s
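fixHost ends with two quick disk probes on /var: percent used (df -h, column 5) and gigabytes available (df -BG, column 4). The same awk idiom extended into a threshold check, as a sketch (the 90% cutoff is illustrative, not minikube's):

    used=$(df -h /var | awk 'NR==2{gsub("%",""); print $5}')
    free=$(df -BG /var | awk 'NR==2{print $4}')
    [ "$used" -gt 90 ] && echo "warning: /var is ${used}% full (only $free left)"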
	I1013 21:59:59.420304  431667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-253311
	I1013 21:59:59.441689  431667 ssh_runner.go:195] Run: cat /version.json
	I1013 21:59:59.441755  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.441772  431667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:59:59.441844  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.464616  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.465752  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.652736  431667 ssh_runner.go:195] Run: systemctl --version
	I1013 21:59:59.659765  431667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:59:59.698790  431667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:59:59.704075  431667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:59:59.704143  431667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:59:59.713473  431667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
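The find/mv invocation above renames any stray bridge or podman CNI configs to *.mk_disabled so they cannot shadow the CNI minikube is about to install; on this node none were present. A shell-safe rendering of the same command (the log prints it with its quoting stripped):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;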
	I1013 21:59:59.713506  431667 start.go:495] detecting cgroup driver to use...
	I1013 21:59:59.713542  431667 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 21:59:59.713588  431667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:59:59.730165  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:59:59.744111  431667 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:59:59.744190  431667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:59:59.759620  431667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:59:59.773303  431667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:59:59.886065  431667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:00:00.002424  431667 docker.go:234] disabling docker service ...
	I1013 22:00:00.002515  431667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:00:00.019799  431667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:00:00.033056  431667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:00:00.154708  431667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:00:00.270508  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:00:00.285164  431667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:00:00.300485  431667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:00:00.300548  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.310040  431667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:00:00.310114  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.319595  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.328525  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.337860  431667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:00:00.347051  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.357767  431667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.366795  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.375949  431667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:00:00.384019  431667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:00:00.392139  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:00.497654  431667 ssh_runner.go:195] Run: sudo systemctl restart crio
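The sed sequence between 22:00:00.300 and 22:00:00.392 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, force conmon into the pod cgroup, open unprivileged ports via default_sysctls, then reload and restart. The core edits condensed into one script (same commands as the log, only grouped):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio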
	I1013 22:00:00.649283  431667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:00:00.649364  431667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:00:00.653935  431667 start.go:563] Will wait 60s for crictl version
	I1013 22:00:00.654025  431667 ssh_runner.go:195] Run: which crictl
	I1013 22:00:00.657801  431667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:00:00.683871  431667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:00:00.683959  431667 ssh_runner.go:195] Run: crio --version
	I1013 22:00:00.714926  431667 ssh_runner.go:195] Run: crio --version
	I1013 22:00:00.749949  431667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:59:57.668098  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 21:59:57.668516  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 21:59:57.668565  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 21:59:57.668628  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 21:59:57.705557  410447 cri.go:89] found id: "17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 21:59:57.705582  410447 cri.go:89] found id: ""
	I1013 21:59:57.705595  410447 logs.go:282] 1 containers: [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c]
	I1013 21:59:57.705657  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.710731  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 21:59:57.710798  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 21:59:57.744656  410447 cri.go:89] found id: ""
	I1013 21:59:57.744742  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.744759  410447 logs.go:284] No container was found matching "etcd"
	I1013 21:59:57.744768  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 21:59:57.744838  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 21:59:57.777004  410447 cri.go:89] found id: ""
	I1013 21:59:57.777034  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.777045  410447 logs.go:284] No container was found matching "coredns"
	I1013 21:59:57.777052  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 21:59:57.777109  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 21:59:57.810839  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 21:59:57.810867  410447 cri.go:89] found id: ""
	I1013 21:59:57.810879  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 21:59:57.810957  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.815734  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 21:59:57.815849  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 21:59:57.850702  410447 cri.go:89] found id: ""
	I1013 21:59:57.850827  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.850841  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 21:59:57.850850  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 21:59:57.850928  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 21:59:57.887395  410447 cri.go:89] found id: "e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 21:59:57.887422  410447 cri.go:89] found id: ""
	I1013 21:59:57.887431  410447 logs.go:282] 1 containers: [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27]
	I1013 21:59:57.887487  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.892145  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 21:59:57.892233  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 21:59:57.928571  410447 cri.go:89] found id: ""
	I1013 21:59:57.928607  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.928619  410447 logs.go:284] No container was found matching "kindnet"
	I1013 21:59:57.928629  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 21:59:57.928691  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 21:59:57.966908  410447 cri.go:89] found id: ""
	I1013 21:59:57.966936  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.966947  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 21:59:57.966961  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 21:59:57.966977  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 21:59:58.042738  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 21:59:58.042774  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 21:59:58.058626  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 21:59:58.058654  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 21:59:58.122804  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 21:59:58.122823  410447 logs.go:123] Gathering logs for kube-apiserver [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c] ...
	I1013 21:59:58.122841  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 21:59:58.159010  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 21:59:58.159046  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 21:59:58.215545  410447 logs.go:123] Gathering logs for kube-controller-manager [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27] ...
	I1013 21:59:58.215589  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 21:59:58.245616  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 21:59:58.245640  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 21:59:58.290999  410447 logs.go:123] Gathering logs for container status ...
	I1013 21:59:58.291042  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:00:00.826058  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
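The interleaved 410447 stream is a second minikube process running its diagnostic sweep against an apiserver that refuses connections: for each control-plane component it lists matching CRI containers by name and tails the logs of any it finds. The core idiom as a standalone sketch (component name illustrative):

    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    if [ -n "$id" ]; then
      sudo crictl logs --tail 400 "$id"
    else
      echo 'No container was found matching "kube-apiserver"'
    fi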
	I1013 22:00:00.751048  431667 cli_runner.go:164] Run: docker network inspect pause-253311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:00:00.769499  431667 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
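The network-inspect template above flattens the Docker network's name, driver, subnet, gateway, MTU, and per-container IPs into one JSON blob. If only the subnet and gateway are wanted, a much shorter template does it:

    docker network inspect pause-253311 \
      -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'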
	I1013 22:00:00.774072  431667 kubeadm.go:883] updating cluster {Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:00:00.774226  431667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:00:00.774285  431667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:00:00.809288  431667 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:00:00.809318  431667 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:00:00.809374  431667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:00:00.838177  431667 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:00:00.838202  431667 cache_images.go:85] Images are preloaded, skipping loading
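"all images are preloaded" means every image this Kubernetes version needs already appears in `sudo crictl images --output json`, so tarball extraction is skipped. A sketch of the same check for a single image, assuming jq is installed (it is not part of this run):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]?' \
      | grep -qx 'registry.k8s.io/pause:3.10.1' && echo preloaded || echo missing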
	I1013 22:00:00.838212  431667 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:00:00.838345  431667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-253311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:00:00.838438  431667 ssh_runner.go:195] Run: crio config
	I1013 22:00:00.889245  431667 cni.go:84] Creating CNI manager for ""
	I1013 22:00:00.889267  431667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:00:00.889285  431667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:00:00.889306  431667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-253311 NodeName:pause-253311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:00:00.889445  431667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-253311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:00:00.889514  431667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:00:00.898363  431667 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:00:00.898452  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:00:00.906643  431667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 22:00:00.920485  431667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:00:00.933952  431667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
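At this point the generated kubeadm config (the YAML above) has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. A sanity check one could run by hand, assuming a kubeadm new enough to carry the `config validate` subcommand (v1.26+; the v1.34.1 binary here qualifies):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new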
	I1013 22:00:00.948495  431667 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:00:00.953302  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:01.061259  431667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:00:01.075314  431667 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311 for IP: 192.168.94.2
	I1013 22:00:01.075340  431667 certs.go:195] generating shared ca certs ...
	I1013 22:00:01.075361  431667 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.075524  431667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:00:01.075575  431667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:00:01.075589  431667 certs.go:257] generating profile certs ...
	I1013 22:00:01.075685  431667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key
	I1013 22:00:01.075744  431667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.key.782ab978
	I1013 22:00:01.075800  431667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.key
	I1013 22:00:01.075947  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:00:01.075986  431667 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:00:01.076014  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:00:01.076045  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:00:01.076085  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:00:01.076115  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:00:01.076168  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:00:01.076915  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:00:01.096525  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:00:01.114782  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:00:01.133074  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:00:01.152052  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:00:01.170814  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:00:01.189243  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:00:01.207124  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:00:01.226429  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:00:01.245184  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:00:01.263655  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:00:01.282219  431667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:00:01.295973  431667 ssh_runner.go:195] Run: openssl version
	I1013 22:00:01.302907  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:00:01.312630  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.316697  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.316757  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.353622  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:00:01.362800  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:00:01.372327  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.376487  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.376551  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.412416  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:00:01.421515  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:00:01.430805  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.434798  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.434860  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.470685  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:00:01.479837  431667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:00:01.484379  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:00:01.520445  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:00:01.557514  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:00:01.594409  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:00:01.629848  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:00:01.668421  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
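Two openssl idioms do the heavy lifting in this cert block: `-hash` prints the subject-name hash that OpenSSL uses for certificate-directory lookups (hence the b5213941.0-style symlinks), and `-checkend 86400` exits non-zero if the cert expires within 24 hours. Sketched together, with paths from this run:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver cert valid for at least 24h" || echo "renewal needed"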
	I1013 22:00:01.712571  431667 kubeadm.go:400] StartCluster: {Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:00:01.712748  431667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:00:01.712820  431667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:00:01.748389  431667 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:01.748416  431667 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:01.748423  431667 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:01.748428  431667 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:01.748433  431667 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:01.748438  431667 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:01.748463  431667 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:01.748476  431667 cri.go:89] found id: ""
	I1013 22:00:01.748527  431667 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:00:01.761063  431667 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:01Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:00:01.761156  431667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:00:01.769686  431667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:00:01.769710  431667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:00:01.769761  431667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:00:01.777919  431667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:00:01.778864  431667 kubeconfig.go:125] found "pause-253311" server: "https://192.168.94.2:8443"
	I1013 22:00:01.780193  431667 kapi.go:59] client config for pause-253311: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key", CAFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:00:01.780728  431667 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 22:00:01.780747  431667 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 22:00:01.780754  431667 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 22:00:01.780759  431667 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 22:00:01.780778  431667 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 22:00:01.781263  431667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:00:01.789881  431667 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1013 22:00:01.789916  431667 kubeadm.go:601] duration metric: took 20.199424ms to restartPrimaryControlPlane
	I1013 22:00:01.789928  431667 kubeadm.go:402] duration metric: took 77.371264ms to StartCluster
	I1013 22:00:01.789947  431667 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.790061  431667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:00:01.791544  431667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.791759  431667 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:00:01.791845  431667 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:00:01.792109  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:01.794238  431667 out.go:179] * Verifying Kubernetes components...
	I1013 22:00:01.794238  431667 out.go:179] * Enabled addons: 
	I1013 22:00:01.795413  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:01.795410  431667 addons.go:514] duration metric: took 3.578896ms for enable addons: enabled=[]
	I1013 22:00:01.912981  431667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:00:01.927630  431667 node_ready.go:35] waiting up to 6m0s for node "pause-253311" to be "Ready" ...
	I1013 22:00:01.936529  431667 node_ready.go:49] node "pause-253311" is "Ready"
	I1013 22:00:01.936559  431667 node_ready.go:38] duration metric: took 8.878515ms for node "pause-253311" to be "Ready" ...
	I1013 22:00:01.936577  431667 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:00:01.936637  431667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:00:01.949780  431667 api_server.go:72] duration metric: took 157.98836ms to wait for apiserver process to appear ...
	I1013 22:00:01.949812  431667 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:00:01.949836  431667 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:00:01.954261  431667 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:00:01.955340  431667 api_server.go:141] control plane version: v1.34.1
	I1013 22:00:01.955366  431667 api_server.go:131] duration metric: took 5.546556ms to wait for apiserver health ...
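The healthz probe is an authenticated HTTPS GET; a healthy apiserver answers 200 with the body "ok". Reproduced with curl, using the client cert/key and CA from the kapi client config above (paths abbreviated here relative to the .minikube dir):

    curl -sS --cacert ca.crt \
      --cert profiles/pause-253311/client.crt --key profiles/pause-253311/client.key \
      https://192.168.94.2:8443/healthz    # prints: ok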
	I1013 22:00:01.955375  431667 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:00:01.958328  431667 system_pods.go:59] 7 kube-system pods found
	I1013 22:00:01.958352  431667 system_pods.go:61] "coredns-66bc5c9577-p7jvh" [93b118c8-6a99-4f2e-be68-cd05c9c12326] Running
	I1013 22:00:01.958357  431667 system_pods.go:61] "etcd-pause-253311" [9475b990-950e-45f5-a488-8a553ccd04ba] Running
	I1013 22:00:01.958361  431667 system_pods.go:61] "kindnet-2htsm" [b31b98f6-f12a-473d-9d04-be38b7c1ee1c] Running
	I1013 22:00:01.958365  431667 system_pods.go:61] "kube-apiserver-pause-253311" [3c26d989-c0a5-42cc-ae92-6cf32762ba2a] Running
	I1013 22:00:01.958369  431667 system_pods.go:61] "kube-controller-manager-pause-253311" [050c0665-1dab-4d73-9bd2-6edfd011e15e] Running
	I1013 22:00:01.958373  431667 system_pods.go:61] "kube-proxy-szdxg" [a882d7c7-03ef-4810-a8d3-4358c1a75e9b] Running
	I1013 22:00:01.958376  431667 system_pods.go:61] "kube-scheduler-pause-253311" [0e46eb0a-1eb7-4cd7-9e47-5cbe24c401fe] Running
	I1013 22:00:01.958381  431667 system_pods.go:74] duration metric: took 2.999479ms to wait for pod list to return data ...
	I1013 22:00:01.958389  431667 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:00:01.960258  431667 default_sa.go:45] found service account: "default"
	I1013 22:00:01.960275  431667 default_sa.go:55] duration metric: took 1.880243ms for default service account to be created ...
	I1013 22:00:01.960283  431667 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:00:01.962973  431667 system_pods.go:86] 7 kube-system pods found
	I1013 22:00:01.963027  431667 system_pods.go:89] "coredns-66bc5c9577-p7jvh" [93b118c8-6a99-4f2e-be68-cd05c9c12326] Running
	I1013 22:00:01.963035  431667 system_pods.go:89] "etcd-pause-253311" [9475b990-950e-45f5-a488-8a553ccd04ba] Running
	I1013 22:00:01.963051  431667 system_pods.go:89] "kindnet-2htsm" [b31b98f6-f12a-473d-9d04-be38b7c1ee1c] Running
	I1013 22:00:01.963059  431667 system_pods.go:89] "kube-apiserver-pause-253311" [3c26d989-c0a5-42cc-ae92-6cf32762ba2a] Running
	I1013 22:00:01.963065  431667 system_pods.go:89] "kube-controller-manager-pause-253311" [050c0665-1dab-4d73-9bd2-6edfd011e15e] Running
	I1013 22:00:01.963076  431667 system_pods.go:89] "kube-proxy-szdxg" [a882d7c7-03ef-4810-a8d3-4358c1a75e9b] Running
	I1013 22:00:01.963082  431667 system_pods.go:89] "kube-scheduler-pause-253311" [0e46eb0a-1eb7-4cd7-9e47-5cbe24c401fe] Running
	I1013 22:00:01.963094  431667 system_pods.go:126] duration metric: took 2.804702ms to wait for k8s-apps to be running ...
	I1013 22:00:01.963106  431667 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:00:01.963156  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:01.977208  431667 system_svc.go:56] duration metric: took 14.091815ms WaitForService to wait for kubelet
	I1013 22:00:01.977237  431667 kubeadm.go:586] duration metric: took 185.451932ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:00:01.977253  431667 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:00:01.979803  431667 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:00:01.979833  431667 node_conditions.go:123] node cpu capacity is 8
	I1013 22:00:01.979848  431667 node_conditions.go:105] duration metric: took 2.589128ms to run NodePressure ...
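The readiness gates minikube just walked (node Ready, kube-system pods Running, kubelet active) can be expressed with stock kubectl, as a sketch against this cluster's kubeconfig:

    kubectl wait node pause-253311 --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=4m
    sudo systemctl is-active --quiet kubelet && echo kubelet running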
	I1013 22:00:01.979864  431667 start.go:241] waiting for startup goroutines ...
	I1013 22:00:01.979874  431667 start.go:246] waiting for cluster config update ...
	I1013 22:00:01.979885  431667 start.go:255] writing updated cluster config ...
	I1013 22:00:01.980258  431667 ssh_runner.go:195] Run: rm -f paused
	I1013 22:00:01.984137  431667 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:00:01.984810  431667 kapi.go:59] client config for pause-253311: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key", CAFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:00:01.987328  431667 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7jvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.991198  431667 pod_ready.go:94] pod "coredns-66bc5c9577-p7jvh" is "Ready"
	I1013 22:00:01.991218  431667 pod_ready.go:86] duration metric: took 3.865654ms for pod "coredns-66bc5c9577-p7jvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.993076  431667 pod_ready.go:83] waiting for pod "etcd-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.996780  431667 pod_ready.go:94] pod "etcd-pause-253311" is "Ready"
	I1013 22:00:01.996802  431667 pod_ready.go:86] duration metric: took 3.70823ms for pod "etcd-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.998609  431667 pod_ready.go:83] waiting for pod "kube-apiserver-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.002082  431667 pod_ready.go:94] pod "kube-apiserver-pause-253311" is "Ready"
	I1013 22:00:02.002100  431667 pod_ready.go:86] duration metric: took 3.471975ms for pod "kube-apiserver-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.003840  431667 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.388058  431667 pod_ready.go:94] pod "kube-controller-manager-pause-253311" is "Ready"
	I1013 22:00:02.388090  431667 pod_ready.go:86] duration metric: took 384.225894ms for pod "kube-controller-manager-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.588245  431667 pod_ready.go:83] waiting for pod "kube-proxy-szdxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:59:57.906128  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt ...
	I1013 21:59:57.906159  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt: {Name:mk4acbd803a34212e4c203dca741a315003adeb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.906356  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key ...
	I1013 21:59:57.906375  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key: {Name:mke6d973df5ec7a918e960b9689f0228a6e96ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.906486  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1013 21:59:57.906508  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1013 21:59:57.906519  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1013 21:59:57.906533  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1013 21:59:57.906559  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1013 21:59:57.906573  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1013 21:59:57.906588  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1013 21:59:57.906608  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1013 21:59:57.906679  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 21:59:57.906738  428671 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 21:59:57.906752  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:59:57.906781  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:59:57.906814  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:59:57.906849  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 21:59:57.906973  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 21:59:57.907034  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:57.907057  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem -> /usr/share/ca-certificates/230929.pem
	I1013 21:59:57.907076  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> /usr/share/ca-certificates/2309292.pem
	I1013 21:59:57.907618  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:59:57.934740  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:59:57.961892  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:59:57.986915  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:59:58.009124  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1013 21:59:58.028520  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:59:58.047776  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:59:58.066828  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:59:58.086092  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:59:58.107049  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 21:59:58.127536  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 21:59:58.147757  428671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:59:58.162543  428671 ssh_runner.go:195] Run: openssl version
	I1013 21:59:58.169090  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:59:58.178803  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.182938  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.183023  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.232501  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:59:58.243505  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 21:59:58.253425  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.257692  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.257758  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.297118  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 21:59:58.306772  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 21:59:58.316122  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.321300  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.321365  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.358723  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:59:58.368338  428671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:59:58.372169  428671 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:59:58.372230  428671 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-886102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:58.372291  428671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:59:58.372354  428671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:59:58.405195  428671 cri.go:89] found id: ""
	I1013 21:59:58.405267  428671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:59:58.414133  428671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:59:58.422277  428671 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:59:58.422332  428671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:59:58.431052  428671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:59:58.431073  428671 kubeadm.go:157] found existing configuration files:
	
	I1013 21:59:58.431124  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:59:58.439714  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:59:58.439785  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:59:58.447909  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:59:58.456129  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:59:58.456213  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:59:58.464454  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:59:58.472763  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:59:58.472834  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:59:58.481309  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:59:58.489297  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:59:58.489350  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:59:58.497119  428671 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:59:58.561744  428671 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 21:59:58.645318  428671 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:00:02.988063  431667 pod_ready.go:94] pod "kube-proxy-szdxg" is "Ready"
	I1013 22:00:02.988090  431667 pod_ready.go:86] duration metric: took 399.816507ms for pod "kube-proxy-szdxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.188274  431667 pod_ready.go:83] waiting for pod "kube-scheduler-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.589239  431667 pod_ready.go:94] pod "kube-scheduler-pause-253311" is "Ready"
	I1013 22:00:03.589271  431667 pod_ready.go:86] duration metric: took 400.96385ms for pod "kube-scheduler-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.589287  431667 pod_ready.go:40] duration metric: took 1.605112654s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:00:03.652833  431667 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:00:03.655235  431667 out.go:179] * Done! kubectl is now configured to use "pause-253311" cluster and "default" namespace by default
	I1013 22:00:05.829140  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1013 22:00:05.829230  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:00:05.829331  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:00:05.862067  410447 cri.go:89] found id: "7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:00:05.862093  410447 cri.go:89] found id: "17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 22:00:05.862099  410447 cri.go:89] found id: ""
	I1013 22:00:05.862110  410447 logs.go:282] 2 containers: [7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c]
	I1013 22:00:05.862173  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.867490  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.872040  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:00:05.872114  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:00:05.904364  410447 cri.go:89] found id: ""
	I1013 22:00:05.904394  410447 logs.go:282] 0 containers: []
	W1013 22:00:05.904404  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:00:05.904412  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:00:05.904487  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:00:05.937481  410447 cri.go:89] found id: ""
	I1013 22:00:05.937513  410447 logs.go:282] 0 containers: []
	W1013 22:00:05.937555  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:00:05.937565  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:00:05.937624  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:00:05.972527  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:00:05.972553  410447 cri.go:89] found id: ""
	I1013 22:00:05.972563  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:00:05.972623  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.977603  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:00:05.977740  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:00:06.015432  410447 cri.go:89] found id: ""
	I1013 22:00:06.015464  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.015473  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:00:06.015479  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:00:06.015546  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:00:06.051102  410447 cri.go:89] found id: "e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 22:00:06.051130  410447 cri.go:89] found id: ""
	I1013 22:00:06.051140  410447 logs.go:282] 1 containers: [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27]
	I1013 22:00:06.051200  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:06.056503  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:00:06.056586  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:00:06.095546  410447 cri.go:89] found id: ""
	I1013 22:00:06.095574  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.095584  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:00:06.095591  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:00:06.095650  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:00:06.128407  410447 cri.go:89] found id: ""
	I1013 22:00:06.128439  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.128451  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:00:06.128469  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:00:06.128486  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:00:06.184944  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:00:06.184980  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:00:06.223800  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:00:06.223832  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:00:06.300933  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:00:06.300970  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:00:06.316949  410447 logs.go:123] Gathering logs for kube-apiserver [7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996] ...
	I1013 22:00:06.316980  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:00:06.355330  410447 logs.go:123] Gathering logs for kube-apiserver [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c] ...
	I1013 22:00:06.355361  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 22:00:06.389354  410447 logs.go:123] Gathering logs for kube-controller-manager [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27] ...
	I1013 22:00:06.389383  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 22:00:06.418636  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:00:06.418662  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:00:06.470954  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:00:06.471008  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.591688546Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592472149Z" level=info msg="Conmon does support the --sync option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592491321Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592505918Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.593199356Z" level=info msg="Conmon does support the --sync option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.593215439Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597241464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597274357Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597779302Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.598175607Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.598222174Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.603924731Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644125205Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-p7jvh Namespace:kube-system ID:ba3ba13152bbcc9c77c5610bad2235945be542cf249dfe7531ecac3d6d151119 UID:93b118c8-6a99-4f2e-be68-cd05c9c12326 NetNS:/var/run/netns/ec2eaf68-6adc-420e-bc31-7ca2fcfe611a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007900e0}] Aliases:map[]}"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644312056Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-p7jvh for CNI network kindnet (type=ptp)"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644724818Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644762741Z" level=info msg="Starting seccomp notifier watcher"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644820203Z" level=info msg="Create NRI interface"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644946147Z" level=info msg="built-in NRI default validator is disabled"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644961513Z" level=info msg="runtime interface created"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644974925Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.64498412Z" level=info msg="runtime interface starting up..."
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645005092Z" level=info msg="starting plugins..."
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645020855Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645408956Z" level=info msg="No systemd watchdog enabled"
	Oct 13 22:00:00 pause-253311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72e4da29ff202       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   ba3ba13152bbc       coredns-66bc5c9577-p7jvh               kube-system
	2631ab18f5640       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   7fbb3b6597482       kindnet-2htsm                          kube-system
	3ff5f4fa83e6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   31d7bf9a01b58       kube-proxy-szdxg                       kube-system
	9f36be9b9e29a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   33 seconds ago      Running             kube-scheduler            0                   fdaf225068962       kube-scheduler-pause-253311            kube-system
	72a7985084091       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   33 seconds ago      Running             kube-controller-manager   0                   d80b18870a882       kube-controller-manager-pause-253311   kube-system
	ae87bf0613601       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago      Running             etcd                      0                   93c629b84d291       etcd-pause-253311                      kube-system
	ffd8882a9dc89       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago      Running             kube-apiserver            0                   12ec819bc8058       kube-apiserver-pause-253311            kube-system
	
	
	==> coredns [72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43540 - 58604 "HINFO IN 3834364473335471005.6906010466827287016. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083871046s
	
	
	==> describe nodes <==
	Name:               pause-253311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-253311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=pause-253311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_59_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-253311
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:59:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-253311
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                f674c289-81cb-419b-aeb1-181b0f68b580
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p7jvh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-253311                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-2htsm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-253311             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-253311    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-szdxg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-253311             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-253311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-253311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-253311 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-253311 event: Registered Node pause-253311 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-253311 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701] <==
	{"level":"warn","ts":"2025-10-13T21:59:34.578727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.585299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.591756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.598475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.605470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.612842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.620425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.628688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.636541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.643225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.649785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.656715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.665717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.672395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.679132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.686152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.694175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.700806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.708415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.715170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.725954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.732806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.740110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.789385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:52.105818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.22059ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765466836980909 > lease_revoke:<id:5b3399df95ff52fb>","response":"size:29"}
	
	
	==> kernel <==
	 22:00:07 up  1:42,  0 user,  load average: 5.49, 2.84, 6.34
	Linux pause-253311 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504] <==
	I1013 21:59:44.217488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:59:44.217735       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 21:59:44.217862       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:59:44.217875       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:59:44.217892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:59:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:59:44.418518       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:59:44.511588       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:59:44.511963       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:59:44.512125       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:59:44.712161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:59:44.712261       1 metrics.go:72] Registering metrics
	I1013 21:59:44.712397       1 controller.go:711] "Syncing nftables rules"
	I1013 21:59:54.423116       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 21:59:54.423168       1 main.go:301] handling current node
	I1013 22:00:04.422077       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:00:04.422106       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89] <==
	I1013 21:59:35.323844       1 policy_source.go:240] refreshing policies
	E1013 21:59:35.335517       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1013 21:59:35.383770       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:59:35.388677       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:35.388770       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 21:59:35.393210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:35.393387       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:59:35.514787       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 21:59:36.219004       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 21:59:36.222865       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 21:59:36.222885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:59:36.700443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:59:36.739880       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:59:36.789823       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 21:59:36.800124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1013 21:59:36.801190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:59:36.805580       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:59:37.487361       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:59:37.807244       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:59:37.816962       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 21:59:37.824474       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:59:43.186130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:43.191516       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:43.234668       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:59:43.534114       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae] <==
	I1013 21:59:42.480583       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 21:59:42.480595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:59:42.480606       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:59:42.480609       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:59:42.480621       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:59:42.480778       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:59:42.481501       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:59:42.481521       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 21:59:42.481538       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:59:42.481577       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:59:42.481618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:59:42.481737       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 21:59:42.482022       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:59:42.482098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:59:42.482125       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 21:59:42.482240       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-253311"
	I1013 21:59:42.482288       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 21:59:42.483665       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 21:59:42.483702       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:59:42.484618       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:59:42.486755       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:59:42.487966       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:59:42.494049       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 21:59:42.503615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:59:57.504339       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4] <==
	I1013 21:59:44.127120       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:59:44.185331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:59:44.286079       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:59:44.286125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 21:59:44.286258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:59:44.308683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:59:44.308745       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:59:44.314488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:59:44.315114       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:59:44.315152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:59:44.316843       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:59:44.317516       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:59:44.317244       1 config.go:200] "Starting service config controller"
	I1013 21:59:44.317399       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:59:44.317562       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:59:44.317566       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:59:44.317612       1 config.go:309] "Starting node config controller"
	I1013 21:59:44.317632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:59:44.317640       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:59:44.417735       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 21:59:44.417769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:59:44.417781       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d] <==
	E1013 21:59:35.251724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:59:35.256923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:59:35.257146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:59:35.257273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:59:35.257328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:59:35.257404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:59:35.257500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:59:35.257589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:59:35.257637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:59:35.257330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:59:35.258527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:59:35.262758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:59:36.110196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:59:36.146820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:59:36.153051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:59:36.210146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:59:36.232652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:59:36.266116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:59:36.318232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:59:36.473281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:59:36.501376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:59:36.504444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:59:36.512636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:59:36.616513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1013 21:59:39.235774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:59:38 pause-253311 kubelet[1327]: E1013 21:59:38.705041    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-253311\" already exists" pod="kube-system/etcd-pause-253311"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.735583    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-253311" podStartSLOduration=1.735560158 podStartE2EDuration="1.735560158s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.722756729 +0000 UTC m=+1.152279316" watchObservedRunningTime="2025-10-13 21:59:38.735560158 +0000 UTC m=+1.165082741"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.752537    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-253311" podStartSLOduration=1.752482544 podStartE2EDuration="1.752482544s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.75071072 +0000 UTC m=+1.180233308" watchObservedRunningTime="2025-10-13 21:59:38.752482544 +0000 UTC m=+1.182005132"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.752683    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-253311" podStartSLOduration=1.7526730160000001 podStartE2EDuration="1.752673016s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.735834329 +0000 UTC m=+1.165356913" watchObservedRunningTime="2025-10-13 21:59:38.752673016 +0000 UTC m=+1.182195603"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.781227    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-253311" podStartSLOduration=1.781205326 podStartE2EDuration="1.781205326s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.768334776 +0000 UTC m=+1.197857367" watchObservedRunningTime="2025-10-13 21:59:38.781205326 +0000 UTC m=+1.210727915"
	Oct 13 21:59:42 pause-253311 kubelet[1327]: I1013 21:59:42.507616    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 21:59:42 pause-253311 kubelet[1327]: I1013 21:59:42.508410    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595618    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-cni-cfg\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595676    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-kube-proxy\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595694    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-lib-modules\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595710    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjfr\" (UniqueName: \"kubernetes.io/projected/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-kube-api-access-hgjfr\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595727    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-xtables-lock\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595741    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn2w\" (UniqueName: \"kubernetes.io/projected/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-kube-api-access-qwn2w\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595791    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-xtables-lock\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595843    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-lib-modules\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:44 pause-253311 kubelet[1327]: I1013 21:59:44.716266    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-szdxg" podStartSLOduration=1.716247452 podStartE2EDuration="1.716247452s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:44.716119866 +0000 UTC m=+7.145642454" watchObservedRunningTime="2025-10-13 21:59:44.716247452 +0000 UTC m=+7.145770040"
	Oct 13 21:59:44 pause-253311 kubelet[1327]: I1013 21:59:44.847423    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2htsm" podStartSLOduration=1.84739845 podStartE2EDuration="1.84739845s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:44.731176764 +0000 UTC m=+7.160699352" watchObservedRunningTime="2025-10-13 21:59:44.84739845 +0000 UTC m=+7.276921043"
	Oct 13 21:59:54 pause-253311 kubelet[1327]: I1013 21:59:54.999289    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.086859    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z798p\" (UniqueName: \"kubernetes.io/projected/93b118c8-6a99-4f2e-be68-cd05c9c12326-kube-api-access-z798p\") pod \"coredns-66bc5c9577-p7jvh\" (UID: \"93b118c8-6a99-4f2e-be68-cd05c9c12326\") " pod="kube-system/coredns-66bc5c9577-p7jvh"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.087140    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93b118c8-6a99-4f2e-be68-cd05c9c12326-config-volume\") pod \"coredns-66bc5c9577-p7jvh\" (UID: \"93b118c8-6a99-4f2e-be68-cd05c9c12326\") " pod="kube-system/coredns-66bc5c9577-p7jvh"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.743135    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p7jvh" podStartSLOduration=12.743112287 podStartE2EDuration="12.743112287s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:55.74308381 +0000 UTC m=+18.172606398" watchObservedRunningTime="2025-10-13 21:59:55.743112287 +0000 UTC m=+18.172634875"
	Oct 13 22:00:04 pause-253311 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:00:04 pause-253311 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:00:04 pause-253311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:00:04 pause-253311 systemd[1]: kubelet.service: Consumed 1.263s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253311 -n pause-253311
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253311 -n pause-253311: exit status 2 (332.508923ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
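The --format value probed above is an ordinary Go template rendered against minikube's per-node status struct, so several fields can be read in one call; a minimal sketch (field names inferred from the {{.Host}} and {{.APIServer}} probes in this report):

    out/minikube-linux-amd64 status -p pause-253311 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
    # a non-zero exit code (2 here) flags a component outside the expected state;
    # as noted above, the harness treats that as "may be ok"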
helpers_test.go:269: (dbg) Run:  kubectl --context pause-253311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-253311
helpers_test.go:243: (dbg) docker inspect pause-253311:

-- stdout --
	[
	    {
	        "Id": "4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296",
	        "Created": "2025-10-13T21:59:20.119651023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421609,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:59:20.179825386Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/hosts",
	        "LogPath": "/var/lib/docker/containers/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296/4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296-json.log",
	        "Name": "/pause-253311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-253311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-253311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e4d367348aca20f97928fd20719201b64a649387db18ea88ca714397866d296",
	                "LowerDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bb504327f68cecb30b0e80936621122e0430712a25881270dc23f11c14a8077/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-253311",
	                "Source": "/var/lib/docker/volumes/pause-253311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-253311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-253311",
	                "name.minikube.sigs.k8s.io": "pause-253311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e606821829bc6a7d95ff9e2de798390f4d2d906d84b003d3b2e49fb53c64606a",
	            "SandboxKey": "/var/run/docker/netns/e606821829bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-253311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:be:17:7b:37:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73e06fdbd50b8913868550c8d723017ebbbe250249a86c952e48794e194a630d",
	                    "EndpointID": "c73eb1aa2ee4ebb65485cd884f29562661c1a6ad9251274ca217dc361adb0bb5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-253311",
	                        "4e4d367348ac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
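Individual fields of the inspect document above can be read back with docker inspect's Go-template flag instead of parsing the full JSON; for example, the host port published for the API server:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-253311
    # prints 33016 for the mapping shown above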
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-253311 -n pause-253311
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-253311 -n pause-253311: exit status 2 (379.360243ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-253311 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-253311 logs -n 25: (1.00992812s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p scheduled-stop-408315                                                                                                                 │ scheduled-stop-408315       │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:57 UTC │
	│ start   │ -p insufficient-storage-240381 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-240381 │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │                     │
	│ delete  │ -p insufficient-storage-240381                                                                                                           │ insufficient-storage-240381 │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:57 UTC │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p offline-crio-932435 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-932435         │ jenkins │ v1.37.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p missing-upgrade-878493 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-878493      │ jenkins │ v1.32.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p stopped-upgrade-126916 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-126916      │ jenkins │ v1.32.0 │ 13 Oct 25 21:57 UTC │ 13 Oct 25 21:58 UTC │
	│ stop    │ -p kubernetes-upgrade-050146                                                                                                             │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ stop    │ stopped-upgrade-126916 stop                                                                                                              │ stopped-upgrade-126916      │ jenkins │ v1.32.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-050146   │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │                     │
	│ start   │ -p missing-upgrade-878493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-878493      │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p stopped-upgrade-126916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-126916      │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p offline-crio-932435                                                                                                                   │ offline-crio-932435         │ jenkins │ v1.37.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:58 UTC │
	│ start   │ -p running-upgrade-850760 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-850760      │ jenkins │ v1.32.0 │ 13 Oct 25 21:58 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p stopped-upgrade-126916                                                                                                                │ stopped-upgrade-126916      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p pause-253311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p running-upgrade-850760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-850760      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p missing-upgrade-878493                                                                                                                │ missing-upgrade-878493      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p NoKubernetes-686990 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p NoKubernetes-686990 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ delete  │ -p running-upgrade-850760                                                                                                                │ running-upgrade-850760      │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 21:59 UTC │
	│ start   │ -p force-systemd-flag-886102 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-886102   │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-686990         │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │                     │
	│ start   │ -p pause-253311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 21:59 UTC │ 13 Oct 25 22:00 UTC │
	│ pause   │ -p pause-253311 --alsologtostderr -v=5                                                                                                   │ pause-253311                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
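	# The last two Audit rows above are the minimal reproduction of this failure;
	# replayed against the same profile, the sequence is (commands copied from the table):
	out/minikube-linux-amd64 start -p pause-253311 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 pause -p pause-253311 --alsologtostderr -v=5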
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:59:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:59:57.779758  431667 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:59:57.779851  431667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:59:57.779856  431667 out.go:374] Setting ErrFile to fd 2...
	I1013 21:59:57.779860  431667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:59:57.780112  431667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:59:57.780643  431667 out.go:368] Setting JSON to false
	I1013 21:59:57.781922  431667 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6146,"bootTime":1760386652,"procs":463,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:59:57.782066  431667 start.go:141] virtualization: kvm guest
	I1013 21:59:57.785126  431667 out.go:179] * [pause-253311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:59:57.786911  431667 notify.go:220] Checking for updates...
	I1013 21:59:57.786945  431667 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:59:57.788219  431667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:59:57.789390  431667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:59:57.790659  431667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:59:57.791957  431667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:59:57.793241  431667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:59:56.942599  428671 cli_runner.go:164] Run: docker network inspect force-systemd-flag-886102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:59:56.960528  428671 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:59:56.965015  428671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:59:56.975948  428671 kubeadm.go:883] updating cluster {Name:force-systemd-flag-886102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:59:56.976131  428671 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:59:56.976209  428671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:59:57.009759  428671 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:59:57.009780  428671 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:59:57.009823  428671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:59:57.036665  428671 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:59:57.036690  428671 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:59:57.036699  428671 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:59:57.036803  428671 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-886102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:59:57.036875  428671 ssh_runner.go:195] Run: crio config
	I1013 21:59:57.086371  428671 cni.go:84] Creating CNI manager for ""
	I1013 21:59:57.086403  428671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:59:57.086428  428671 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:59:57.086459  428671 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-886102 NodeName:force-systemd-flag-886102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:59:57.086628  428671 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-886102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
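	# Sketch, not from the captured run: the generated kubeadm config above can be
	# checked before init with the matching kubeadm binary (binary path taken from
	# the log below; "kubeadm config validate" is assumed available in this
	# kubeadm generation):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new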
	
	I1013 21:59:57.086707  428671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:59:57.095364  428671 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:59:57.095451  428671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:59:57.103869  428671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1013 21:59:57.117020  428671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:59:57.133339  428671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1013 21:59:57.146906  428671 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:59:57.150778  428671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:59:57.161065  428671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:59:57.245766  428671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:59:57.272556  428671 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102 for IP: 192.168.85.2
	I1013 21:59:57.272579  428671 certs.go:195] generating shared ca certs ...
	I1013 21:59:57.272595  428671 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.272740  428671 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 21:59:57.272781  428671 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 21:59:57.272791  428671 certs.go:257] generating profile certs ...
	I1013 21:59:57.272846  428671 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key
	I1013 21:59:57.272868  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt with IP's: []
	I1013 21:59:57.354345  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt ...
	I1013 21:59:57.354373  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.crt: {Name:mk5f46a4eababedf56975f07072e9db31e052433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.354541  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key ...
	I1013 21:59:57.354555  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/client.key: {Name:mkbab824799bbb21f8a90267b7f855810b009e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.354635  428671 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf
	I1013 21:59:57.354650  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 21:59:57.590642  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf ...
	I1013 21:59:57.590671  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf: {Name:mk861c343d1992e080faf57ac9c9a2beac9b2fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.590885  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf ...
	I1013 21:59:57.590905  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf: {Name:mk58a307f517071b0add4abf7bd64070fdfe78f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.591036  428671 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt.d940acbf -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt
	I1013 21:59:57.591124  428671 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key.d940acbf -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key
	I1013 21:59:57.591180  428671 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key
	I1013 21:59:57.591196  428671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt with IP's: []
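
The certs block above reuses the shared minikube CA and then mints profile certificates, including an apiserver serving cert whose SANs cover the service VIP, localhost, and the node IP. A condensed crypto/x509 sketch of the same idea; it creates a throwaway CA so it is self-contained, and key sizes, names, and lifetimes are illustrative only:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (the real flow reuses .minikube/ca.key instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
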
	I1013 21:59:57.795211  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:59:57.795887  431667 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:59:57.823607  431667 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:59:57.823732  431667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:59:57.894702  431667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 21:59:57.883471152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:59:57.894824  431667 docker.go:318] overlay module found
	I1013 21:59:57.896644  431667 out.go:179] * Using the docker driver based on existing profile
	I1013 21:59:57.898111  431667 start.go:305] selected driver: docker
	I1013 21:59:57.898129  431667 start.go:925] validating driver "docker" against &{Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:57.898282  431667 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:59:57.898397  431667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:59:57.971022  431667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 21:59:57.958915343 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:59:57.971903  431667 cni.go:84] Creating CNI manager for ""
	I1013 21:59:57.971981  431667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:59:57.972062  431667 start.go:349] cluster config:
	{Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:57.974365  431667 out.go:179] * Starting "pause-253311" primary control-plane node in "pause-253311" cluster
	I1013 21:59:57.975751  431667 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:59:57.977605  431667 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 21:59:57.978891  431667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:59:57.978941  431667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 21:59:57.978955  431667 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:59:57.978970  431667 cache.go:58] Caching tarball of preloaded images
	I1013 21:59:57.979109  431667 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:59:57.979125  431667 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
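
"Found local preload ... skipping download" is a simple cache check: if the expected tarball already exists on disk, the download is skipped (the real flow also validates the tarball, which is omitted here). A trivial sketch of that check, with the path copied from the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path copied from the log above; rooted under $HOME for the sketch.
        preload := os.Getenv("HOME") + "/.minikube/cache/preloaded-tarball/" +
            "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
        if _, err := os.Stat(preload); err == nil {
            fmt.Println("found local preload, skipping download")
            return
        }
        fmt.Println("preload missing, would download here") // fetch step stubbed out
    }
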
	I1013 21:59:57.979271  431667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/config.json ...
	I1013 21:59:58.004394  431667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 21:59:58.004415  431667 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 21:59:58.004433  431667 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:59:58.004463  431667 start.go:360] acquireMachinesLock for pause-253311: {Name:mk6b04fa29f2bc336f4d43e7e5f3cdef893fa6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:59:58.004526  431667 start.go:364] duration metric: took 39.034µs to acquireMachinesLock for "pause-253311"
	I1013 21:59:58.004547  431667 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:59:58.004556  431667 fix.go:54] fixHost starting: 
	I1013 21:59:58.004814  431667 cli_runner.go:164] Run: docker container inspect pause-253311 --format={{.State.Status}}
	I1013 21:59:58.023481  431667 fix.go:112] recreateIfNeeded on pause-253311: state=Running err=<nil>
	W1013 21:59:58.023534  431667 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:59:58.028537  431667 out.go:252] * Updating the running docker "pause-253311" container ...
	I1013 21:59:58.028581  431667 machine.go:93] provisionDockerMachine start ...
	I1013 21:59:58.028674  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.048585  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.048822  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.048834  431667 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:59:58.189187  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253311
	
	I1013 21:59:58.189218  431667 ubuntu.go:182] provisioning hostname "pause-253311"
	I1013 21:59:58.189287  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.209303  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.209553  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.209569  431667 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-253311 && echo "pause-253311" | sudo tee /etc/hostname
	I1013 21:59:58.361381  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253311
	
	I1013 21:59:58.361497  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.381361  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.381608  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.381628  431667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-253311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-253311/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-253311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:59:58.524946  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:59:58.525004  431667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 21:59:58.525030  431667 ubuntu.go:190] setting up certificates
	I1013 21:59:58.525043  431667 provision.go:84] configureAuth start
	I1013 21:59:58.525103  431667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-253311
	I1013 21:59:58.545111  431667 provision.go:143] copyHostCerts
	I1013 21:59:58.545175  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 21:59:58.545192  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 21:59:58.545263  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 21:59:58.545365  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 21:59:58.545375  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 21:59:58.545401  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 21:59:58.545531  431667 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 21:59:58.545542  431667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 21:59:58.545566  431667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 21:59:58.545632  431667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.pause-253311 san=[127.0.0.1 192.168.94.2 localhost minikube pause-253311]
	I1013 21:59:58.623400  431667 provision.go:177] copyRemoteCerts
	I1013 21:59:58.623471  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:59:58.623511  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.645371  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:58.749062  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:59:58.767265  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 21:59:58.785173  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:59:58.802874  431667 provision.go:87] duration metric: took 277.817212ms to configureAuth
	I1013 21:59:58.802901  431667 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:59:58.803176  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:59:58.803298  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:58.821367  431667 main.go:141] libmachine: Using SSH client type: native
	I1013 21:59:58.821580  431667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1013 21:59:58.821597  431667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:59:59.145842  431667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:59:59.145871  431667 machine.go:96] duration metric: took 1.117280469s to provisionDockerMachine
	I1013 21:59:59.145886  431667 start.go:293] postStartSetup for "pause-253311" (driver="docker")
	I1013 21:59:59.145899  431667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:59:59.145950  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:59:59.146050  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.166667  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.267059  431667 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:59:59.271002  431667 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:59:59.271040  431667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:59:59.271061  431667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 21:59:59.271118  431667 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 21:59:59.271221  431667 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 21:59:59.271347  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:59:59.279310  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 21:59:59.298174  431667 start.go:296] duration metric: took 152.268761ms for postStartSetup
	I1013 21:59:59.298256  431667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:59:59.298317  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.317186  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.414335  431667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:59:59.420181  431667 fix.go:56] duration metric: took 1.415616609s for fixHost
	I1013 21:59:59.420210  431667 start.go:83] releasing machines lock for "pause-253311", held for 1.41567056s
	I1013 21:59:59.420304  431667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-253311
	I1013 21:59:59.441689  431667 ssh_runner.go:195] Run: cat /version.json
	I1013 21:59:59.441755  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.441772  431667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:59:59.441844  431667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-253311
	I1013 21:59:59.464616  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.465752  431667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/pause-253311/id_rsa Username:docker}
	I1013 21:59:59.652736  431667 ssh_runner.go:195] Run: systemctl --version
	I1013 21:59:59.659765  431667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:59:59.698790  431667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:59:59.704075  431667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:59:59.704143  431667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:59:59.713473  431667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
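
The find command above disables any conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which keeps the step reversible (here nothing matched, so nothing was disabled). A standalone sketch of that rename-to-disable pattern, with the directory and name patterns taken from the log:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range matches {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
                continue
            }
            // Rename instead of delete, so the config can be restored later.
            if err := os.Rename(p, p+".mk_disabled"); err != nil {
                log.Fatal(err)
            }
            fmt.Println("disabled", p)
        }
    }
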
	I1013 21:59:59.713506  431667 start.go:495] detecting cgroup driver to use...
	I1013 21:59:59.713542  431667 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 21:59:59.713588  431667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:59:59.730165  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:59:59.744111  431667 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:59:59.744190  431667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:59:59.759620  431667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:59:59.773303  431667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:59:59.886065  431667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:00:00.002424  431667 docker.go:234] disabling docker service ...
	I1013 22:00:00.002515  431667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:00:00.019799  431667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:00:00.033056  431667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:00:00.154708  431667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:00:00.270508  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:00:00.285164  431667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:00:00.300485  431667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:00:00.300548  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.310040  431667 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:00:00.310114  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.319595  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.328525  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.337860  431667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:00:00.347051  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.357767  431667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.366795  431667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:00:00.375949  431667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:00:00.384019  431667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:00:00.392139  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:00.497654  431667 ssh_runner.go:195] Run: sudo systemctl restart crio
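
The run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before restarting CRI-O. A Go sketch of one such idempotent rewrite, equivalent in spirit to the cgroup_manager sed; try it against a copy of the file:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, out, 0644); err != nil {
            log.Fatal(err)
        }
    }
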
	I1013 22:00:00.649283  431667 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:00:00.649364  431667 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:00:00.653935  431667 start.go:563] Will wait 60s for crictl version
	I1013 22:00:00.654025  431667 ssh_runner.go:195] Run: which crictl
	I1013 22:00:00.657801  431667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:00:00.683871  431667 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:00:00.683959  431667 ssh_runner.go:195] Run: crio --version
	I1013 22:00:00.714926  431667 ssh_runner.go:195] Run: crio --version
	I1013 22:00:00.749949  431667 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
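
A few lines up, the tool polls for up to 60s each for the CRI socket to appear and for crictl to answer, rather than sleeping a fixed amount. A minimal deadline-poll sketch of the socket wait, with the socket path from the log:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            // Like the `stat /var/run/crio/crio.sock` probe in the log.
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket ready:", sock)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("timed out waiting for %s", sock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
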
	I1013 21:59:57.668098  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 21:59:57.668516  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 21:59:57.668565  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 21:59:57.668628  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 21:59:57.705557  410447 cri.go:89] found id: "17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 21:59:57.705582  410447 cri.go:89] found id: ""
	I1013 21:59:57.705595  410447 logs.go:282] 1 containers: [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c]
	I1013 21:59:57.705657  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.710731  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 21:59:57.710798  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 21:59:57.744656  410447 cri.go:89] found id: ""
	I1013 21:59:57.744742  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.744759  410447 logs.go:284] No container was found matching "etcd"
	I1013 21:59:57.744768  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 21:59:57.744838  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 21:59:57.777004  410447 cri.go:89] found id: ""
	I1013 21:59:57.777034  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.777045  410447 logs.go:284] No container was found matching "coredns"
	I1013 21:59:57.777052  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 21:59:57.777109  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 21:59:57.810839  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 21:59:57.810867  410447 cri.go:89] found id: ""
	I1013 21:59:57.810879  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 21:59:57.810957  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.815734  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 21:59:57.815849  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 21:59:57.850702  410447 cri.go:89] found id: ""
	I1013 21:59:57.850827  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.850841  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 21:59:57.850850  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 21:59:57.850928  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 21:59:57.887395  410447 cri.go:89] found id: "e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 21:59:57.887422  410447 cri.go:89] found id: ""
	I1013 21:59:57.887431  410447 logs.go:282] 1 containers: [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27]
	I1013 21:59:57.887487  410447 ssh_runner.go:195] Run: which crictl
	I1013 21:59:57.892145  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 21:59:57.892233  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 21:59:57.928571  410447 cri.go:89] found id: ""
	I1013 21:59:57.928607  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.928619  410447 logs.go:284] No container was found matching "kindnet"
	I1013 21:59:57.928629  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 21:59:57.928691  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 21:59:57.966908  410447 cri.go:89] found id: ""
	I1013 21:59:57.966936  410447 logs.go:282] 0 containers: []
	W1013 21:59:57.966947  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 21:59:57.966961  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 21:59:57.966977  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 21:59:58.042738  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 21:59:58.042774  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 21:59:58.058626  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 21:59:58.058654  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 21:59:58.122804  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
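
The failed describe-nodes record above illustrates the capture pattern used throughout these logs: run the command, keep stdout and stderr separately, and fold both into the error message when the exit status is non-zero. A compact os/exec sketch of that pattern; the kubectl arguments mirror the log:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig") // args as in the log
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            // Fold both streams into the failure report, as the records above do.
            fmt.Printf("command failed: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
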
	I1013 21:59:58.122823  410447 logs.go:123] Gathering logs for kube-apiserver [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c] ...
	I1013 21:59:58.122841  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 21:59:58.159010  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 21:59:58.159046  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 21:59:58.215545  410447 logs.go:123] Gathering logs for kube-controller-manager [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27] ...
	I1013 21:59:58.215589  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 21:59:58.245616  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 21:59:58.245640  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 21:59:58.290999  410447 logs.go:123] Gathering logs for container status ...
	I1013 21:59:58.291042  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:00:00.826058  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
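
The recovery loop for this profile alternates healthz probes against https://192.168.76.2:8443/healthz with log gathering until the apiserver answers. A bare-bones probe sketch; it skips TLS verification only because the apiserver cert is signed by the cluster-local CA, and real code should load that CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch only: load the cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. connection refused, as in the log
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
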
	I1013 22:00:00.751048  431667 cli_runner.go:164] Run: docker network inspect pause-253311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:00:00.769499  431667 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1013 22:00:00.774072  431667 kubeadm.go:883] updating cluster {Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:00:00.774226  431667 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:00:00.774285  431667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:00:00.809288  431667 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:00:00.809318  431667 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:00:00.809374  431667 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:00:00.838177  431667 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:00:00.838202  431667 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:00:00.838212  431667 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:00:00.838345  431667 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-253311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:00:00.838438  431667 ssh_runner.go:195] Run: crio config
	I1013 22:00:00.889245  431667 cni.go:84] Creating CNI manager for ""
	I1013 22:00:00.889267  431667 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:00:00.889285  431667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:00:00.889306  431667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-253311 NodeName:pause-253311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:00:00.889445  431667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-253311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
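
The kubeadm config above is rendered from per-profile values (node IP, pod/service CIDRs, runtime socket, cgroup driver). A toy text/template sketch that renders just a KubeletConfiguration tail from such values, to show the shape of that generation step; the template and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.RuntimeEndpoint}}
    clusterDomain: "{{.DNSDomain}}"
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        data := struct {
            CgroupDriver, RuntimeEndpoint, DNSDomain string
        }{"systemd", "unix:///var/run/crio/crio.sock", "cluster.local"}
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }
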
	
	I1013 22:00:00.889514  431667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:00:00.898363  431667 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:00:00.898452  431667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:00:00.906643  431667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 22:00:00.920485  431667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:00:00.933952  431667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1013 22:00:00.948495  431667 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:00:00.953302  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:01.061259  431667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:00:01.075314  431667 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311 for IP: 192.168.94.2
	I1013 22:00:01.075340  431667 certs.go:195] generating shared ca certs ...
	I1013 22:00:01.075361  431667 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.075524  431667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:00:01.075575  431667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:00:01.075589  431667 certs.go:257] generating profile certs ...
	I1013 22:00:01.075685  431667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key
	I1013 22:00:01.075744  431667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.key.782ab978
	I1013 22:00:01.075800  431667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.key
	I1013 22:00:01.075947  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:00:01.075986  431667 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:00:01.076014  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:00:01.076045  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:00:01.076085  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:00:01.076115  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:00:01.076168  431667 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:00:01.076915  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:00:01.096525  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:00:01.114782  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:00:01.133074  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:00:01.152052  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:00:01.170814  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:00:01.189243  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:00:01.207124  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:00:01.226429  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:00:01.245184  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:00:01.263655  431667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:00:01.282219  431667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:00:01.295973  431667 ssh_runner.go:195] Run: openssl version
	I1013 22:00:01.302907  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:00:01.312630  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.316697  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.316757  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:00:01.353622  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:00:01.362800  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:00:01.372327  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.376487  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.376551  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:00:01.412416  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:00:01.421515  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:00:01.430805  431667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.434798  431667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.434860  431667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:00:01.470685  431667 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
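Note: the three cert installs above follow the same pattern: copy the PEM into /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is the name OpenSSL uses to look up trust anchors. A minimal Go sketch of the same dance, assuming local (non-SSH) execution and a hypothetical installCA helper:

    // installCA links pemPath into the OpenSSL trust directory under its
    // <subject-hash>.0 name, mirroring the test's openssl/ln -fs pair.
    // Hypothetical helper; minikube runs these commands over SSH.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout -in <pem>` prints the 8-hex-digit subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs equivalent: drop any stale link first
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }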
	I1013 22:00:01.479837  431667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:00:01.484379  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:00:01.520445  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:00:01.557514  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:00:01.594409  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:00:01.629848  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:00:01.668421  431667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
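Note: `openssl x509 -noout -checkend 86400` exits non-zero when a certificate expires within the next 24 hours; the run above applies it to each control-plane cert to decide whether certs must be regenerated. The same predicate in pure Go, as a sketch (helper name hypothetical):

    // expiresSoon reports whether the cert at path expires within 24h,
    // matching `openssl x509 -noout -checkend 86400`. Sketch only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresSoon(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // -checkend 86400: fail if NotAfter falls within the next 86400 seconds.
        return time.Now().Add(86400 * time.Second).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }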
	I1013 22:00:01.712571  431667 kubeadm.go:400] StartCluster: {Name:pause-253311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-253311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:00:01.712748  431667 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:00:01.712820  431667 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:00:01.748389  431667 cri.go:89] found id: "72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8"
	I1013 22:00:01.748416  431667 cri.go:89] found id: "2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504"
	I1013 22:00:01.748423  431667 cri.go:89] found id: "3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4"
	I1013 22:00:01.748428  431667 cri.go:89] found id: "9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d"
	I1013 22:00:01.748433  431667 cri.go:89] found id: "72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae"
	I1013 22:00:01.748438  431667 cri.go:89] found id: "ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701"
	I1013 22:00:01.748463  431667 cri.go:89] found id: "ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89"
	I1013 22:00:01.748476  431667 cri.go:89] found id: ""
	I1013 22:00:01.748527  431667 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:00:01.761063  431667 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:00:01Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:00:01.761156  431667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:00:01.769686  431667 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:00:01.769710  431667 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:00:01.769761  431667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:00:01.777919  431667 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:00:01.778864  431667 kubeconfig.go:125] found "pause-253311" server: "https://192.168.94.2:8443"
	I1013 22:00:01.780193  431667 kapi.go:59] client config for pause-253311: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key", CAFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:00:01.780728  431667 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 22:00:01.780747  431667 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 22:00:01.780754  431667 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 22:00:01.780759  431667 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 22:00:01.780778  431667 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
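Note: the five "Feature gate default state" lines come from client-go's environment-driven feature gates (envvar.go). If my reading of that convention is right, each gate can be flipped per-process via a KUBE_FEATURE_<Name> environment variable; a sketch of toggling one for a child process, under that assumption:

    // Flip client-go's WatchListClient gate for a spawned process.
    // The KUBE_FEATURE_<Name> env convention is an assumption about
    // client-go's envvar gate reader, not something this log confirms.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status")
        cmd.Env = append(os.Environ(), "KUBE_FEATURE_WatchListClient=true")
        out, _ := cmd.CombinedOutput()
        fmt.Printf("%s", out)
    }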
	I1013 22:00:01.781263  431667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:00:01.789881  431667 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1013 22:00:01.789916  431667 kubeadm.go:601] duration metric: took 20.199424ms to restartPrimaryControlPlane
	I1013 22:00:01.789928  431667 kubeadm.go:402] duration metric: took 77.371264ms to StartCluster
	I1013 22:00:01.789947  431667 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.790061  431667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:00:01.791544  431667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:01.791759  431667 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:00:01.791845  431667 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:00:01.792109  431667 config.go:182] Loaded profile config "pause-253311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:01.794238  431667 out.go:179] * Verifying Kubernetes components...
	I1013 22:00:01.794238  431667 out.go:179] * Enabled addons: 
	I1013 22:00:01.795413  431667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:01.795410  431667 addons.go:514] duration metric: took 3.578896ms for enable addons: enabled=[]
	I1013 22:00:01.912981  431667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:00:01.927630  431667 node_ready.go:35] waiting up to 6m0s for node "pause-253311" to be "Ready" ...
	I1013 22:00:01.936529  431667 node_ready.go:49] node "pause-253311" is "Ready"
	I1013 22:00:01.936559  431667 node_ready.go:38] duration metric: took 8.878515ms for node "pause-253311" to be "Ready" ...
	I1013 22:00:01.936577  431667 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:00:01.936637  431667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:00:01.949780  431667 api_server.go:72] duration metric: took 157.98836ms to wait for apiserver process to appear ...
	I1013 22:00:01.949812  431667 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:00:01.949836  431667 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:00:01.954261  431667 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:00:01.955340  431667 api_server.go:141] control plane version: v1.34.1
	I1013 22:00:01.955366  431667 api_server.go:131] duration metric: took 5.546556ms to wait for apiserver health ...
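Note: the healthz wait above is a plain HTTPS GET against https://<node>:8443/healthz that succeeds on a 200 response with body "ok", using the profile's CA for verification. A minimal polling sketch, assuming the cert paths used earlier in this run:

    // Poll the apiserver's /healthz endpoint until it answers 200 "ok".
    // CA path and endpoint taken from this run; sketch only.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        ca, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(ca)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        for {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == 200 {
                    fmt.Printf("healthz: %s\n", body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }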
	I1013 22:00:01.955375  431667 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:00:01.958328  431667 system_pods.go:59] 7 kube-system pods found
	I1013 22:00:01.958352  431667 system_pods.go:61] "coredns-66bc5c9577-p7jvh" [93b118c8-6a99-4f2e-be68-cd05c9c12326] Running
	I1013 22:00:01.958357  431667 system_pods.go:61] "etcd-pause-253311" [9475b990-950e-45f5-a488-8a553ccd04ba] Running
	I1013 22:00:01.958361  431667 system_pods.go:61] "kindnet-2htsm" [b31b98f6-f12a-473d-9d04-be38b7c1ee1c] Running
	I1013 22:00:01.958365  431667 system_pods.go:61] "kube-apiserver-pause-253311" [3c26d989-c0a5-42cc-ae92-6cf32762ba2a] Running
	I1013 22:00:01.958369  431667 system_pods.go:61] "kube-controller-manager-pause-253311" [050c0665-1dab-4d73-9bd2-6edfd011e15e] Running
	I1013 22:00:01.958373  431667 system_pods.go:61] "kube-proxy-szdxg" [a882d7c7-03ef-4810-a8d3-4358c1a75e9b] Running
	I1013 22:00:01.958376  431667 system_pods.go:61] "kube-scheduler-pause-253311" [0e46eb0a-1eb7-4cd7-9e47-5cbe24c401fe] Running
	I1013 22:00:01.958381  431667 system_pods.go:74] duration metric: took 2.999479ms to wait for pod list to return data ...
	I1013 22:00:01.958389  431667 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:00:01.960258  431667 default_sa.go:45] found service account: "default"
	I1013 22:00:01.960275  431667 default_sa.go:55] duration metric: took 1.880243ms for default service account to be created ...
	I1013 22:00:01.960283  431667 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:00:01.962973  431667 system_pods.go:86] 7 kube-system pods found
	I1013 22:00:01.963027  431667 system_pods.go:89] "coredns-66bc5c9577-p7jvh" [93b118c8-6a99-4f2e-be68-cd05c9c12326] Running
	I1013 22:00:01.963035  431667 system_pods.go:89] "etcd-pause-253311" [9475b990-950e-45f5-a488-8a553ccd04ba] Running
	I1013 22:00:01.963051  431667 system_pods.go:89] "kindnet-2htsm" [b31b98f6-f12a-473d-9d04-be38b7c1ee1c] Running
	I1013 22:00:01.963059  431667 system_pods.go:89] "kube-apiserver-pause-253311" [3c26d989-c0a5-42cc-ae92-6cf32762ba2a] Running
	I1013 22:00:01.963065  431667 system_pods.go:89] "kube-controller-manager-pause-253311" [050c0665-1dab-4d73-9bd2-6edfd011e15e] Running
	I1013 22:00:01.963076  431667 system_pods.go:89] "kube-proxy-szdxg" [a882d7c7-03ef-4810-a8d3-4358c1a75e9b] Running
	I1013 22:00:01.963082  431667 system_pods.go:89] "kube-scheduler-pause-253311" [0e46eb0a-1eb7-4cd7-9e47-5cbe24c401fe] Running
	I1013 22:00:01.963094  431667 system_pods.go:126] duration metric: took 2.804702ms to wait for k8s-apps to be running ...
	I1013 22:00:01.963106  431667 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:00:01.963156  431667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:00:01.977208  431667 system_svc.go:56] duration metric: took 14.091815ms WaitForService to wait for kubelet
	I1013 22:00:01.977237  431667 kubeadm.go:586] duration metric: took 185.451932ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:00:01.977253  431667 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:00:01.979803  431667 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:00:01.979833  431667 node_conditions.go:123] node cpu capacity is 8
	I1013 22:00:01.979848  431667 node_conditions.go:105] duration metric: took 2.589128ms to run NodePressure ...
	I1013 22:00:01.979864  431667 start.go:241] waiting for startup goroutines ...
	I1013 22:00:01.979874  431667 start.go:246] waiting for cluster config update ...
	I1013 22:00:01.979885  431667 start.go:255] writing updated cluster config ...
	I1013 22:00:01.980258  431667 ssh_runner.go:195] Run: rm -f paused
	I1013 22:00:01.984137  431667 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:00:01.984810  431667 kapi.go:59] client config for pause-253311: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/profiles/pause-253311/client.key", CAFile:"/home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:00:01.987328  431667 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7jvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.991198  431667 pod_ready.go:94] pod "coredns-66bc5c9577-p7jvh" is "Ready"
	I1013 22:00:01.991218  431667 pod_ready.go:86] duration metric: took 3.865654ms for pod "coredns-66bc5c9577-p7jvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.993076  431667 pod_ready.go:83] waiting for pod "etcd-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.996780  431667 pod_ready.go:94] pod "etcd-pause-253311" is "Ready"
	I1013 22:00:01.996802  431667 pod_ready.go:86] duration metric: took 3.70823ms for pod "etcd-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:01.998609  431667 pod_ready.go:83] waiting for pod "kube-apiserver-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.002082  431667 pod_ready.go:94] pod "kube-apiserver-pause-253311" is "Ready"
	I1013 22:00:02.002100  431667 pod_ready.go:86] duration metric: took 3.471975ms for pod "kube-apiserver-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.003840  431667 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.388058  431667 pod_ready.go:94] pod "kube-controller-manager-pause-253311" is "Ready"
	I1013 22:00:02.388090  431667 pod_ready.go:86] duration metric: took 384.225894ms for pod "kube-controller-manager-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:02.588245  431667 pod_ready.go:83] waiting for pod "kube-proxy-szdxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:59:57.906128  428671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt ...
	I1013 21:59:57.906159  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt: {Name:mk4acbd803a34212e4c203dca741a315003adeb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.906356  428671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key ...
	I1013 21:59:57.906375  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key: {Name:mke6d973df5ec7a918e960b9689f0228a6e96ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:59:57.906486  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1013 21:59:57.906508  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1013 21:59:57.906519  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1013 21:59:57.906533  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1013 21:59:57.906559  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1013 21:59:57.906573  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1013 21:59:57.906588  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1013 21:59:57.906608  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1013 21:59:57.906679  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 21:59:57.906738  428671 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 21:59:57.906752  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:59:57.906781  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:59:57.906814  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:59:57.906849  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 21:59:57.906973  428671 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 21:59:57.907034  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:57.907057  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem -> /usr/share/ca-certificates/230929.pem
	I1013 21:59:57.907076  428671 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> /usr/share/ca-certificates/2309292.pem
	I1013 21:59:57.907618  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:59:57.934740  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:59:57.961892  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:59:57.986915  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:59:58.009124  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1013 21:59:58.028520  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:59:58.047776  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:59:58.066828  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/force-systemd-flag-886102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:59:58.086092  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:59:58.107049  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 21:59:58.127536  428671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 21:59:58.147757  428671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:59:58.162543  428671 ssh_runner.go:195] Run: openssl version
	I1013 21:59:58.169090  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:59:58.178803  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.182938  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.183023  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:59:58.232501  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:59:58.243505  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 21:59:58.253425  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.257692  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.257758  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 21:59:58.297118  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 21:59:58.306772  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 21:59:58.316122  428671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.321300  428671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.321365  428671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 21:59:58.358723  428671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:59:58.368338  428671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:59:58.372169  428671 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
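Note: the failed stat above is how certs.go:400 distinguishes a first start (full kubeadm init) from a restart (certs already on disk): if apiserver-kubelet-client.crt is missing, kubeadm has never initialized this node. The equivalent check, sketched in Go:

    // First-start heuristic from certs.go:400: a missing kubelet client
    // cert means kubeadm never ran here. Sketch only.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        firstStart := errors.Is(err, fs.ErrNotExist)
        fmt.Println("likely first start:", firstStart)
    }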
	I1013 21:59:58.372230  428671 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-886102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-886102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:59:58.372291  428671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:59:58.372354  428671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:59:58.405195  428671 cri.go:89] found id: ""
	I1013 21:59:58.405267  428671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:59:58.414133  428671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:59:58.422277  428671 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:59:58.422332  428671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:59:58.431052  428671 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:59:58.431073  428671 kubeadm.go:157] found existing configuration files:
	
	I1013 21:59:58.431124  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:59:58.439714  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:59:58.439785  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:59:58.447909  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:59:58.456129  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:59:58.456213  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:59:58.464454  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:59:58.472763  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:59:58.472834  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:59:58.481309  428671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:59:58.489297  428671 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:59:58.489350  428671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
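Note: the four grep/rm pairs above are stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init, so init never reuses a kubeconfig pointing at a different endpoint. The loop, sketched as equivalent Go over the same four files:

    // Drop kubeconfigs that don't reference the expected control-plane
    // endpoint, mirroring kubeadm.go:163's grep-then-rm sequence. Sketch.
    package main

    import (
        "bytes"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(f) // missing or stale: safe to delete before init
            }
        }
    }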
	I1013 21:59:58.497119  428671 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:59:58.561744  428671 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 21:59:58.645318  428671 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:00:02.988063  431667 pod_ready.go:94] pod "kube-proxy-szdxg" is "Ready"
	I1013 22:00:02.988090  431667 pod_ready.go:86] duration metric: took 399.816507ms for pod "kube-proxy-szdxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.188274  431667 pod_ready.go:83] waiting for pod "kube-scheduler-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.589239  431667 pod_ready.go:94] pod "kube-scheduler-pause-253311" is "Ready"
	I1013 22:00:03.589271  431667 pod_ready.go:86] duration metric: took 400.96385ms for pod "kube-scheduler-pause-253311" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:00:03.589287  431667 pod_ready.go:40] duration metric: took 1.605112654s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:00:03.652833  431667 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:00:03.655235  431667 out.go:179] * Done! kubectl is now configured to use "pause-253311" cluster and "default" namespace by default
	I1013 22:00:05.829140  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1013 22:00:05.829230  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:00:05.829331  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:00:05.862067  410447 cri.go:89] found id: "7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:00:05.862093  410447 cri.go:89] found id: "17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 22:00:05.862099  410447 cri.go:89] found id: ""
	I1013 22:00:05.862110  410447 logs.go:282] 2 containers: [7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c]
	I1013 22:00:05.862173  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.867490  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.872040  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:00:05.872114  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:00:05.904364  410447 cri.go:89] found id: ""
	I1013 22:00:05.904394  410447 logs.go:282] 0 containers: []
	W1013 22:00:05.904404  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:00:05.904412  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:00:05.904487  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:00:05.937481  410447 cri.go:89] found id: ""
	I1013 22:00:05.937513  410447 logs.go:282] 0 containers: []
	W1013 22:00:05.937555  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:00:05.937565  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:00:05.937624  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:00:05.972527  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:00:05.972553  410447 cri.go:89] found id: ""
	I1013 22:00:05.972563  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:00:05.972623  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:05.977603  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:00:05.977740  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:00:06.015432  410447 cri.go:89] found id: ""
	I1013 22:00:06.015464  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.015473  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:00:06.015479  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:00:06.015546  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:00:06.051102  410447 cri.go:89] found id: "e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 22:00:06.051130  410447 cri.go:89] found id: ""
	I1013 22:00:06.051140  410447 logs.go:282] 1 containers: [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27]
	I1013 22:00:06.051200  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:00:06.056503  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:00:06.056586  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:00:06.095546  410447 cri.go:89] found id: ""
	I1013 22:00:06.095574  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.095584  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:00:06.095591  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:00:06.095650  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:00:06.128407  410447 cri.go:89] found id: ""
	I1013 22:00:06.128439  410447 logs.go:282] 0 containers: []
	W1013 22:00:06.128451  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:00:06.128469  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:00:06.128486  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:00:06.184944  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:00:06.184980  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:00:06.223800  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:00:06.223832  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:00:06.300933  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:00:06.300970  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:00:06.316949  410447 logs.go:123] Gathering logs for kube-apiserver [7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996] ...
	I1013 22:00:06.316980  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:00:06.355330  410447 logs.go:123] Gathering logs for kube-apiserver [17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c] ...
	I1013 22:00:06.355361  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 17016d7851618fa0d4027de50ae4987968e9ea6e570b0bdb1c29697f1e7b476c"
	I1013 22:00:06.389354  410447 logs.go:123] Gathering logs for kube-controller-manager [e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27] ...
	I1013 22:00:06.389383  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e977b2f297ecb2613f43d4990ec61bcabc490b9bd0c61ff71bbffd9249b63c27"
	I1013 22:00:06.418636  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:00:06.418662  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:00:06.470954  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:00:06.471008  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
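Note: once the 410447 apiserver wait hits its deadline, minikube fans out per diagnostic source: `crictl logs --tail 400 <id>` for each container it found, `journalctl -u kubelet` and `-u crio` for the services, a filtered dmesg, and `kubectl describe nodes`. A condensed sketch of that fan-out (the real code runs each pipeline over SSH):

    // Diagnostic fan-out after a healthz timeout (logs.go:123). Sketch only;
    // container-log and describe-nodes pipelines omitted for brevity.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := map[string]string{
            "kubelet":  "sudo journalctl -u kubelet -n 400",
            "CRI-O":    "sudo journalctl -u crio -n 400",
            "dmesg":    "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "statuses": "sudo crictl ps -a",
        }
        for name, c := range cmds {
            out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("==> %s <==\n%s\n", name, out)
        }
    }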
	I1013 22:00:07.815861  428671 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:00:07.815945  428671 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:00:07.816094  428671 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:00:07.816178  428671 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:00:07.816231  428671 kubeadm.go:318] OS: Linux
	I1013 22:00:07.816298  428671 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:00:07.816354  428671 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:00:07.816415  428671 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:00:07.816470  428671 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:00:07.816541  428671 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:00:07.816595  428671 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:00:07.816663  428671 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:00:07.816716  428671 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:00:07.816838  428671 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:00:07.816977  428671 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:00:07.817127  428671 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:00:07.817221  428671 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:00:07.819558  428671 out.go:252]   - Generating certificates and keys ...
	I1013 22:00:07.819645  428671 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:00:07.819725  428671 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:00:07.819807  428671 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:00:07.819908  428671 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:00:07.820025  428671 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:00:07.820101  428671 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:00:07.820180  428671 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:00:07.820381  428671 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-886102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:00:07.820477  428671 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:00:07.820668  428671 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-886102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:00:07.820775  428671 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:00:07.820864  428671 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:00:07.820922  428671 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:00:07.821008  428671 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:00:07.821076  428671 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:00:07.821150  428671 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:00:07.821235  428671 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:00:07.821317  428671 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:00:07.821380  428671 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:00:07.821478  428671 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:00:07.821578  428671 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:00:07.822776  428671 out.go:252]   - Booting up control plane ...
	I1013 22:00:07.822853  428671 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:00:07.822919  428671 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:00:07.822976  428671 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:00:07.823111  428671 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:00:07.823237  428671 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:00:07.823398  428671 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:00:07.823523  428671 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:00:07.823584  428671 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:00:07.823723  428671 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:00:07.823837  428671 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:00:07.823912  428671 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000923327s
	I1013 22:00:07.824035  428671 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:00:07.824125  428671 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:00:07.824240  428671 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:00:07.824355  428671 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:00:07.824456  428671 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.772431182s
	I1013 22:00:07.824549  428671 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.780497715s
	I1013 22:00:07.824651  428671 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.50225892s
	I1013 22:00:07.824790  428671 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:00:07.824975  428671 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:00:07.825081  428671 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:00:07.825389  428671 kubeadm.go:318] [mark-control-plane] Marking the node force-systemd-flag-886102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:00:07.825467  428671 kubeadm.go:318] [bootstrap-token] Using token: 7acba4.jyek80jbizo82dvc
	I1013 22:00:07.827432  428671 out.go:252]   - Configuring RBAC rules ...
	I1013 22:00:07.827582  428671 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:00:07.827694  428671 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:00:07.827894  428671 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:00:07.828101  428671 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:00:07.828270  428671 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:00:07.828394  428671 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:00:07.828551  428671 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:00:07.828608  428671 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:00:07.828671  428671 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:00:07.828679  428671 kubeadm.go:318] 
	I1013 22:00:07.828759  428671 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:00:07.828769  428671 kubeadm.go:318] 
	I1013 22:00:07.828886  428671 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:00:07.828895  428671 kubeadm.go:318] 
	I1013 22:00:07.828926  428671 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:00:07.829044  428671 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:00:07.829123  428671 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:00:07.829132  428671 kubeadm.go:318] 
	I1013 22:00:07.829200  428671 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:00:07.829209  428671 kubeadm.go:318] 
	I1013 22:00:07.829279  428671 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:00:07.829287  428671 kubeadm.go:318] 
	I1013 22:00:07.829362  428671 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:00:07.829458  428671 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:00:07.829556  428671 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:00:07.829566  428671 kubeadm.go:318] 
	I1013 22:00:07.829690  428671 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:00:07.829796  428671 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:00:07.829806  428671 kubeadm.go:318] 
	I1013 22:00:07.829917  428671 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7acba4.jyek80jbizo82dvc \
	I1013 22:00:07.830086  428671 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:00:07.830124  428671 kubeadm.go:318] 	--control-plane 
	I1013 22:00:07.830132  428671 kubeadm.go:318] 
	I1013 22:00:07.830227  428671 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:00:07.830240  428671 kubeadm.go:318] 
	I1013 22:00:07.830357  428671 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7acba4.jyek80jbizo82dvc \
	I1013 22:00:07.830505  428671 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
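Note: the --discovery-token-ca-cert-hash printed above is kubeadm's public-key pin: a SHA-256 over the CA certificate's DER-encoded SubjectPublicKeyInfo, letting joining nodes verify the CA without transporting the cert itself. Recomputing the same digest from the CA file, sketched in Go (cert path taken from this cluster's layout):

    // Recompute kubeadm's discovery-token-ca-cert-hash: sha256 of the CA's
    // SubjectPublicKeyInfo in DER form. Sketch only.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }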
	I1013 22:00:07.830522  428671 cni.go:84] Creating CNI manager for ""
	I1013 22:00:07.830531  428671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:00:07.832473  428671 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:00:07.833552  428671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:00:07.837890  428671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:00:07.837912  428671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:00:07.855752  428671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:00:08.100646  428671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:00:08.100861  428671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-886102 minikube.k8s.io/updated_at=2025_10_13T22_00_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=force-systemd-flag-886102 minikube.k8s.io/primary=true
	I1013 22:00:08.100870  428671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:00:08.114413  428671 ops.go:34] apiserver oom_adj: -16
	I1013 22:00:08.195308  428671 kubeadm.go:1113] duration metric: took 94.515668ms to wait for elevateKubeSystemPrivileges
	I1013 22:00:08.195343  428671 kubeadm.go:402] duration metric: took 9.82311855s to StartCluster
	I1013 22:00:08.195365  428671 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:08.195436  428671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:00:08.196883  428671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:00:08.197182  428671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:00:08.197191  428671 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:00:08.197273  428671 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:00:08.197394  428671 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-flag-886102"
	I1013 22:00:08.197405  428671 config.go:182] Loaded profile config "force-systemd-flag-886102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:08.197414  428671 addons.go:238] Setting addon storage-provisioner=true in "force-systemd-flag-886102"
	I1013 22:00:08.197406  428671 addons.go:69] Setting default-storageclass=true in profile "force-systemd-flag-886102"
	I1013 22:00:08.197471  428671 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-886102"
	I1013 22:00:08.197453  428671 host.go:66] Checking if "force-systemd-flag-886102" exists ...
	I1013 22:00:08.197881  428671 cli_runner.go:164] Run: docker container inspect force-systemd-flag-886102 --format={{.State.Status}}
	I1013 22:00:08.198034  428671 cli_runner.go:164] Run: docker container inspect force-systemd-flag-886102 --format={{.State.Status}}
	I1013 22:00:08.199831  428671 out.go:179] * Verifying Kubernetes components...
	I1013 22:00:08.201009  428671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:00:08.221239  428671 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.591688546Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592472149Z" level=info msg="Conmon does support the --sync option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592491321Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.592505918Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.593199356Z" level=info msg="Conmon does support the --sync option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.593215439Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597241464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597274357Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.597779302Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.598175607Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.598222174Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.603924731Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644125205Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-p7jvh Namespace:kube-system ID:ba3ba13152bbcc9c77c5610bad2235945be542cf249dfe7531ecac3d6d151119 UID:93b118c8-6a99-4f2e-be68-cd05c9c12326 NetNS:/var/run/netns/ec2eaf68-6adc-420e-bc31-7ca2fcfe611a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007900e0}] Aliases:map[]}"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644312056Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-p7jvh for CNI network kindnet (type=ptp)"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644724818Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644762741Z" level=info msg="Starting seccomp notifier watcher"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644820203Z" level=info msg="Create NRI interface"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644946147Z" level=info msg="built-in NRI default validator is disabled"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644961513Z" level=info msg="runtime interface created"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.644974925Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.64498412Z" level=info msg="runtime interface starting up..."
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645005092Z" level=info msg="starting plugins..."
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645020855Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 13 22:00:00 pause-253311 crio[2189]: time="2025-10-13T22:00:00.645408956Z" level=info msg="No systemd watchdog enabled"
	Oct 13 22:00:00 pause-253311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72e4da29ff202       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   ba3ba13152bbc       coredns-66bc5c9577-p7jvh               kube-system
	2631ab18f5640       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   7fbb3b6597482       kindnet-2htsm                          kube-system
	3ff5f4fa83e6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   31d7bf9a01b58       kube-proxy-szdxg                       kube-system
	9f36be9b9e29a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   fdaf225068962       kube-scheduler-pause-253311            kube-system
	72a7985084091       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   d80b18870a882       kube-controller-manager-pause-253311   kube-system
	ae87bf0613601       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   93c629b84d291       etcd-pause-253311                      kube-system
	ffd8882a9dc89       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   12ec819bc8058       kube-apiserver-pause-253311            kube-system
	
	
	==> coredns [72e4da29ff2020997167660dbf5e577efbcb79f4cf72f7406a5a3d592c3753d8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43540 - 58604 "HINFO IN 3834364473335471005.6906010466827287016. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083871046s
	
	
	==> describe nodes <==
	Name:               pause-253311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-253311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=pause-253311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_59_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-253311
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:59:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:59:58 +0000   Mon, 13 Oct 2025 21:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-253311
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                f674c289-81cb-419b-aeb1-181b0f68b580
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p7jvh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-253311                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-2htsm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-253311             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-253311    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-szdxg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-253311             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-253311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-253311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-253311 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-253311 event: Registered Node pause-253311 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-253311 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [ae87bf0613601bc98b12e73093e02e301d4f94c0099edc7e8d9a3dfc637ea701] <==
	{"level":"warn","ts":"2025-10-13T21:59:34.578727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.585299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.591756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.598475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.605470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.612842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.620425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.628688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.636541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.643225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.649785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.656715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.665717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.672395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.679132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.686152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.694175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.700806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.708415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.715170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.725954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.732806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.740110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:34.789385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:59:52.105818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.22059ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765466836980909 > lease_revoke:<id:5b3399df95ff52fb>","response":"size:29"}
	
	
	==> kernel <==
	 22:00:09 up  1:42,  0 user,  load average: 5.21, 2.83, 6.32
	Linux pause-253311 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2631ab18f564077ad577f6e26dabadd22f99644287d0fc7fd642f0231e2fb504] <==
	I1013 21:59:44.217488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:59:44.217735       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 21:59:44.217862       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:59:44.217875       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:59:44.217892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:59:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:59:44.418518       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:59:44.511588       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:59:44.511963       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:59:44.512125       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:59:44.712161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:59:44.712261       1 metrics.go:72] Registering metrics
	I1013 21:59:44.712397       1 controller.go:711] "Syncing nftables rules"
	I1013 21:59:54.423116       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 21:59:54.423168       1 main.go:301] handling current node
	I1013 22:00:04.422077       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:00:04.422106       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffd8882a9dc8994ddf3be9937257ffa6da74907717ea8751def522f9a2473b89] <==
	I1013 21:59:35.323844       1 policy_source.go:240] refreshing policies
	E1013 21:59:35.335517       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1013 21:59:35.383770       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:59:35.388677       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:35.388770       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 21:59:35.393210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:35.393387       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:59:35.514787       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 21:59:36.219004       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 21:59:36.222865       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 21:59:36.222885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:59:36.700443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:59:36.739880       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:59:36.789823       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 21:59:36.800124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1013 21:59:36.801190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:59:36.805580       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:59:37.487361       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:59:37.807244       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:59:37.816962       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 21:59:37.824474       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:59:43.186130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:43.191516       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:59:43.234668       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:59:43.534114       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [72a7985084091f5ff6e2d6afc41a7b849fe6f6bb0d7e00caad660ce2d8be6fae] <==
	I1013 21:59:42.480583       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 21:59:42.480595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:59:42.480606       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:59:42.480609       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:59:42.480621       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:59:42.480778       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:59:42.481501       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:59:42.481521       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 21:59:42.481538       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:59:42.481577       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:59:42.481618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:59:42.481737       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 21:59:42.482022       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:59:42.482098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:59:42.482125       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 21:59:42.482240       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-253311"
	I1013 21:59:42.482288       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 21:59:42.483665       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 21:59:42.483702       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:59:42.484618       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:59:42.486755       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:59:42.487966       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:59:42.494049       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 21:59:42.503615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:59:57.504339       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3ff5f4fa83e6bdb9765489cb27d134699de2c145759b39c1ebb50b69637599e4] <==
	I1013 21:59:44.127120       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:59:44.185331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:59:44.286079       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:59:44.286125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 21:59:44.286258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:59:44.308683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:59:44.308745       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:59:44.314488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:59:44.315114       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:59:44.315152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:59:44.316843       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:59:44.317516       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:59:44.317244       1 config.go:200] "Starting service config controller"
	I1013 21:59:44.317399       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:59:44.317562       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:59:44.317566       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:59:44.317612       1 config.go:309] "Starting node config controller"
	I1013 21:59:44.317632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:59:44.317640       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:59:44.417735       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 21:59:44.417769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:59:44.417781       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9f36be9b9e29aac5bebbf06e4ae167def223bbd61742d5ff1d3ae7b5b075414d] <==
	E1013 21:59:35.251724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:59:35.256923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:59:35.257146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:59:35.257273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:59:35.257328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:59:35.257404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:59:35.257500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:59:35.257589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:59:35.257637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:59:35.257330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:59:35.258527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:59:35.262758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:59:36.110196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:59:36.146820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:59:36.153051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:59:36.210146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:59:36.232652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:59:36.266116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:59:36.318232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:59:36.473281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:59:36.501376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:59:36.504444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:59:36.512636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:59:36.616513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1013 21:59:39.235774       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:59:38 pause-253311 kubelet[1327]: E1013 21:59:38.705041    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-253311\" already exists" pod="kube-system/etcd-pause-253311"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.735583    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-253311" podStartSLOduration=1.735560158 podStartE2EDuration="1.735560158s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.722756729 +0000 UTC m=+1.152279316" watchObservedRunningTime="2025-10-13 21:59:38.735560158 +0000 UTC m=+1.165082741"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.752537    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-253311" podStartSLOduration=1.752482544 podStartE2EDuration="1.752482544s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.75071072 +0000 UTC m=+1.180233308" watchObservedRunningTime="2025-10-13 21:59:38.752482544 +0000 UTC m=+1.182005132"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.752683    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-253311" podStartSLOduration=1.7526730160000001 podStartE2EDuration="1.752673016s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.735834329 +0000 UTC m=+1.165356913" watchObservedRunningTime="2025-10-13 21:59:38.752673016 +0000 UTC m=+1.182195603"
	Oct 13 21:59:38 pause-253311 kubelet[1327]: I1013 21:59:38.781227    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-253311" podStartSLOduration=1.781205326 podStartE2EDuration="1.781205326s" podCreationTimestamp="2025-10-13 21:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:38.768334776 +0000 UTC m=+1.197857367" watchObservedRunningTime="2025-10-13 21:59:38.781205326 +0000 UTC m=+1.210727915"
	Oct 13 21:59:42 pause-253311 kubelet[1327]: I1013 21:59:42.507616    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 21:59:42 pause-253311 kubelet[1327]: I1013 21:59:42.508410    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595618    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-cni-cfg\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595676    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-kube-proxy\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595694    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-lib-modules\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595710    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjfr\" (UniqueName: \"kubernetes.io/projected/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-kube-api-access-hgjfr\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595727    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-xtables-lock\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595741    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn2w\" (UniqueName: \"kubernetes.io/projected/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-kube-api-access-qwn2w\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595791    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a882d7c7-03ef-4810-a8d3-4358c1a75e9b-xtables-lock\") pod \"kube-proxy-szdxg\" (UID: \"a882d7c7-03ef-4810-a8d3-4358c1a75e9b\") " pod="kube-system/kube-proxy-szdxg"
	Oct 13 21:59:43 pause-253311 kubelet[1327]: I1013 21:59:43.595843    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b31b98f6-f12a-473d-9d04-be38b7c1ee1c-lib-modules\") pod \"kindnet-2htsm\" (UID: \"b31b98f6-f12a-473d-9d04-be38b7c1ee1c\") " pod="kube-system/kindnet-2htsm"
	Oct 13 21:59:44 pause-253311 kubelet[1327]: I1013 21:59:44.716266    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-szdxg" podStartSLOduration=1.716247452 podStartE2EDuration="1.716247452s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:44.716119866 +0000 UTC m=+7.145642454" watchObservedRunningTime="2025-10-13 21:59:44.716247452 +0000 UTC m=+7.145770040"
	Oct 13 21:59:44 pause-253311 kubelet[1327]: I1013 21:59:44.847423    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2htsm" podStartSLOduration=1.84739845 podStartE2EDuration="1.84739845s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:44.731176764 +0000 UTC m=+7.160699352" watchObservedRunningTime="2025-10-13 21:59:44.84739845 +0000 UTC m=+7.276921043"
	Oct 13 21:59:54 pause-253311 kubelet[1327]: I1013 21:59:54.999289    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.086859    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z798p\" (UniqueName: \"kubernetes.io/projected/93b118c8-6a99-4f2e-be68-cd05c9c12326-kube-api-access-z798p\") pod \"coredns-66bc5c9577-p7jvh\" (UID: \"93b118c8-6a99-4f2e-be68-cd05c9c12326\") " pod="kube-system/coredns-66bc5c9577-p7jvh"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.087140    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93b118c8-6a99-4f2e-be68-cd05c9c12326-config-volume\") pod \"coredns-66bc5c9577-p7jvh\" (UID: \"93b118c8-6a99-4f2e-be68-cd05c9c12326\") " pod="kube-system/coredns-66bc5c9577-p7jvh"
	Oct 13 21:59:55 pause-253311 kubelet[1327]: I1013 21:59:55.743135    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p7jvh" podStartSLOduration=12.743112287 podStartE2EDuration="12.743112287s" podCreationTimestamp="2025-10-13 21:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 21:59:55.74308381 +0000 UTC m=+18.172606398" watchObservedRunningTime="2025-10-13 21:59:55.743112287 +0000 UTC m=+18.172634875"
	Oct 13 22:00:04 pause-253311 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:00:04 pause-253311 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:00:04 pause-253311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:00:04 pause-253311 systemd[1]: kubelet.service: Consumed 1.263s CPU time.
	

-- /stdout --
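Note: the journal excerpt above ends with systemd stopping kubelet.service, which is part of what `minikube pause` does on the node before this post-mortem was collected. A quick way to confirm the kubelet state on the node afterwards (plain systemctl usage, shown only as an illustration, not part of the test):

	# Prints "inactive" once the kubelet has been stopped, "active" otherwise.
	out/minikube-linux-amd64 -p pause-253311 ssh -- systemctl is-active kubelet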
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253311 -n pause-253311
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253311 -n pause-253311: exit status 2 (340.64776ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-253311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.20s)
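For context: the probe above still reports the API server as "Running" after the attempted pause, and the helper tolerates the non-zero exit ("may be ok") because a paused profile is not expected to report every component as Running. The same Go-template mechanism can read several fields in one call; Host, Kubelet and APIServer are the standard minikube status fields (illustrative sketch only):

	# One call for the three component states; a successfully paused profile
	# would typically report something like "Running Stopped Paused".
	out/minikube-linux-amd64 status -p pause-253311 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'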

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (297.997894ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:01:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
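The stderr block pins down the failure: before enabling an addon, minikube checks whether the cluster is paused by listing containers with runc, and that probe (`sudo runc list -f json`) exits 1 here because the runc state directory /run/runc does not exist on this CRI-O node. The probe is easy to reproduce by hand over `minikube ssh` (illustrative; /run/runc is runc's default --root for root, and it only exists once a container has been created through that root):

	# Re-run the paused-check that MK_ADDON_ENABLE_PAUSED wraps; on a node where
	# /run/runc has not been populated this fails with
	# "open /run/runc: no such file or directory", exactly as in the log above.
	out/minikube-linux-amd64 -p old-k8s-version-534822 ssh -- sudo runc list -f json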
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-534822 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-534822 describe deploy/metrics-server -n kube-system: exit status 1 (92.449159ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-534822 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
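The assertion above checks that the metrics-server Deployment was templated with the registry/image overrides passed via --registries and --images (fake.domain/registry.k8s.io/echoserver:1.4); since the enable itself failed, the Deployment was never created and the describe returns NotFound. Had it existed, the image in use could be read directly with plain kubectl (shown only for illustration):

	# Print the container image the addon Deployment actually runs.
	kubectl --context old-k8s-version-534822 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'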
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-534822
helpers_test.go:243: (dbg) docker inspect old-k8s-version-534822:

-- stdout --
	[
	    {
	        "Id": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	        "Created": "2025-10-13T22:00:56.40821218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:00:56.447339232Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hostname",
	        "HostsPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hosts",
	        "LogPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4-json.log",
	        "Name": "/old-k8s-version-534822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-534822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-534822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	                "LowerDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-534822",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-534822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-534822",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "880cead20e24431467a45f2990674d4f60eba982221769fdf3a9bee063f29604",
	            "SandboxKey": "/var/run/docker/netns/880cead20e24",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-534822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:93:ab:d5:31:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d1498e7b1a230857c86022c34281ff31ff5a8fd51b2621fd4063f6a1e47ae63",
	                    "EndpointID": "8c80eda0be845b8ac30394104433691ac0f527d8ae516584d375dcb41a115bf8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-534822",
	                        "cebe2b59b715"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
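The full inspect payload is dumped above for the post-mortem; when only a couple of fields matter, `docker inspect -f` with a Go template is a more readable probe. For example, the container state plus the host port mapped to the API server's 8443/tcp (per the payload above this would print "running 33056"; shown only as an illustration):

	docker inspect old-k8s-version-534822 \
	  -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'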
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25: (1.268244302s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                        │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo docker system info                                                                                                                                                                                                      │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo containerd config dump                                                                                                                                                                                                  │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902 │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:01:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:01:12.512741  455618 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:01:12.513062  455618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:01:12.513075  455618 out.go:374] Setting ErrFile to fd 2...
	I1013 22:01:12.513082  455618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:01:12.513301  455618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:01:12.513814  455618 out.go:368] Setting JSON to false
	I1013 22:01:12.515160  455618 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6221,"bootTime":1760386652,"procs":450,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:01:12.515270  455618 start.go:141] virtualization: kvm guest
	I1013 22:01:12.518299  455618 out.go:179] * [no-preload-080337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:01:12.520304  455618 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:01:12.520301  455618 notify.go:220] Checking for updates...
	I1013 22:01:12.521935  455618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:01:12.523375  455618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:01:12.524719  455618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:01:12.526494  455618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:01:12.527598  455618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:01:12.529661  455618 config.go:182] Loaded profile config "cert-expiration-894101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:01:12.529829  455618 config.go:182] Loaded profile config "kubernetes-upgrade-050146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:01:12.529927  455618 config.go:182] Loaded profile config "old-k8s-version-534822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:01:12.530070  455618 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:01:12.555032  455618 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:01:12.555170  455618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:01:12.617625  455618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:01:12.607157553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:01:12.617773  455618 docker.go:318] overlay module found
	I1013 22:01:12.619864  455618 out.go:179] * Using the docker driver based on user configuration
	I1013 22:01:12.621192  455618 start.go:305] selected driver: docker
	I1013 22:01:12.621211  455618 start.go:925] validating driver "docker" against <nil>
	I1013 22:01:12.621224  455618 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:01:12.621816  455618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:01:12.682416  455618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:01:12.671956786 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:01:12.682583  455618 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:01:12.682842  455618 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:01:12.684680  455618 out.go:179] * Using Docker driver with root privileges
	I1013 22:01:12.686221  455618 cni.go:84] Creating CNI manager for ""
	I1013 22:01:12.686303  455618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:01:12.686320  455618 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:01:12.686398  455618 start.go:349] cluster config:
	{Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:01:12.688270  455618 out.go:179] * Starting "no-preload-080337" primary control-plane node in "no-preload-080337" cluster
	I1013 22:01:12.689638  455618 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:01:12.690932  455618 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:01:12.692588  455618 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:01:12.692664  455618 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:01:12.692766  455618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/config.json ...
	I1013 22:01:12.692814  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/config.json: {Name:mkc790495584793f9520a44174a393159b3de53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:12.692897  455618 cache.go:107] acquiring lock: {Name:mk22a9364551c6b5c8c880eceb2cdd611b51da2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.692951  455618 cache.go:107] acquiring lock: {Name:mk6044d54e95581671b8d12eb16ba7154be9e4ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693003  455618 cache.go:107] acquiring lock: {Name:mk6931ad5aa94faa6a047c26bd9f08eca07726d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693036  455618 cache.go:107] acquiring lock: {Name:mkf399189dc414297ba076f45e34ea1ae863ef3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693017  455618 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 22:01:12.693089  455618 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 209.005µs
	I1013 22:01:12.693074  455618 cache.go:107] acquiring lock: {Name:mk6019ad9dabd5e086757fd62cea931cca589008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693113  455618 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 22:01:12.693047  455618 cache.go:107] acquiring lock: {Name:mk4a3ac78b285b903bf7de76f6d114f2486eff4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693072  455618 cache.go:107] acquiring lock: {Name:mk6d91f6f2b8cc9ae34afd3116b942c4c3dc11bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693153  455618 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:12.693153  455618 cache.go:107] acquiring lock: {Name:mk9154227203ad745e43a6293d5e771c17558feb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.693222  455618 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:12.693252  455618 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:12.693293  455618 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:12.693316  455618 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:01:12.693328  455618 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:12.693370  455618 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:12.694681  455618 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:12.694682  455618 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:01:12.694699  455618 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:12.694699  455618 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:12.694722  455618 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:12.694682  455618 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:12.694827  455618 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:12.716138  455618 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:01:12.716164  455618 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:01:12.716183  455618 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:01:12.716208  455618 start.go:360] acquireMachinesLock for no-preload-080337: {Name:mk2bf55649fb50a9c6baaf8b730c64cf9325030f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:01:12.716305  455618 start.go:364] duration metric: took 79.538µs to acquireMachinesLock for "no-preload-080337"
	I1013 22:01:12.716329  455618 start.go:93] Provisioning new machine with config: &{Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:01:12.716408  455618 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:01:11.461374  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:11.962249  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:12.461853  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:12.961819  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:13.462219  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:13.961878  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:14.462216  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:14.962259  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:15.461897  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:15.961641  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:13.387102  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:59196->192.168.76.2:8443: read: connection reset by peer
	I1013 22:01:13.387172  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:13.387585  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:13.422233  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:13.422252  410447 cri.go:89] found id: "7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:01:13.422256  410447 cri.go:89] found id: ""
	I1013 22:01:13.422264  410447 logs.go:282] 2 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a 7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996]
	I1013 22:01:13.422331  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:13.426943  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:13.431569  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:13.431636  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:13.462362  410447 cri.go:89] found id: ""
	I1013 22:01:13.462392  410447 logs.go:282] 0 containers: []
	W1013 22:01:13.462402  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:13.462410  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:13.462471  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:13.496840  410447 cri.go:89] found id: ""
	I1013 22:01:13.496870  410447 logs.go:282] 0 containers: []
	W1013 22:01:13.496881  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:13.496888  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:13.496963  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:13.529966  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:13.529986  410447 cri.go:89] found id: ""
	I1013 22:01:13.530007  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:13.530237  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:13.535201  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:13.535270  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:13.571284  410447 cri.go:89] found id: ""
	I1013 22:01:13.571319  410447 logs.go:282] 0 containers: []
	W1013 22:01:13.571343  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:13.571352  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:13.571406  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:13.603352  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:13.603384  410447 cri.go:89] found id: "f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:13.603391  410447 cri.go:89] found id: ""
	I1013 22:01:13.603401  410447 logs.go:282] 2 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878]
	I1013 22:01:13.603464  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:13.607977  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:13.612445  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:13.612516  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:13.645138  410447 cri.go:89] found id: ""
	I1013 22:01:13.645161  410447 logs.go:282] 0 containers: []
	W1013 22:01:13.645169  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:13.645176  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:13.645224  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:13.678858  410447 cri.go:89] found id: ""
	I1013 22:01:13.678888  410447 logs.go:282] 0 containers: []
	W1013 22:01:13.678896  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:13.678910  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:13.678921  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:13.768910  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:13.768947  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:13.786389  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:13.786418  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:13.853564  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:13.853589  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:13.853607  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:13.893598  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:13.893633  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:13.967302  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:13.967338  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:14.010317  410447 logs.go:123] Gathering logs for kube-apiserver [7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996] ...
	I1013 22:01:14.010351  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d05cfad3344d068d56b937fab95c0cd0c49de0523366c64007456d3d535d996"
	I1013 22:01:14.073735  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:14.073781  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:14.142525  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:14.142575  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:14.186234  410447 logs.go:123] Gathering logs for kube-controller-manager [f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878] ...
	I1013 22:01:14.186278  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:12.719484  455618 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:01:12.719757  455618 start.go:159] libmachine.API.Create for "no-preload-080337" (driver="docker")
	I1013 22:01:12.719789  455618 client.go:168] LocalClient.Create starting
	I1013 22:01:12.719855  455618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:01:12.719890  455618 main.go:141] libmachine: Decoding PEM data...
	I1013 22:01:12.719907  455618 main.go:141] libmachine: Parsing certificate...
	I1013 22:01:12.719958  455618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:01:12.719986  455618 main.go:141] libmachine: Decoding PEM data...
	I1013 22:01:12.720023  455618 main.go:141] libmachine: Parsing certificate...
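
The three steps logged above (reading, decoding, parsing) are the standard PEM-to-x509 pipeline from Go's standard library. A minimal sketch of that pipeline, assuming the same ca.pem layout; the loadCert helper below is illustrative, not minikube's actual code:

-- sketch (Go) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// loadCert performs the three steps named in the log: read the
// certificate data, decode the PEM block, parse the x509 certificate.
func loadCert(path string) (*x509.Certificate, error) {
	data, err := os.ReadFile(path) // "Reading certificate data"
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(data) // "Decoding PEM data"
	if block == nil {
		return nil, fmt.Errorf("no PEM block in %s", path)
	}
	return x509.ParseCertificate(block.Bytes) // "Parsing certificate"
}

func main() {
	cert, err := loadCert(os.ExpandEnv("$HOME/.minikube/certs/ca.pem"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CA subject:", cert.Subject)
}
-- /sketch --
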
	I1013 22:01:12.720405  455618 cli_runner.go:164] Run: docker network inspect no-preload-080337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:01:12.738748  455618 cli_runner.go:211] docker network inspect no-preload-080337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:01:12.738820  455618 network_create.go:284] running [docker network inspect no-preload-080337] to gather additional debugging logs...
	I1013 22:01:12.738838  455618 cli_runner.go:164] Run: docker network inspect no-preload-080337
	W1013 22:01:12.756854  455618 cli_runner.go:211] docker network inspect no-preload-080337 returned with exit code 1
	I1013 22:01:12.756883  455618 network_create.go:287] error running [docker network inspect no-preload-080337]: docker network inspect no-preload-080337: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-080337 not found
	I1013 22:01:12.756898  455618 network_create.go:289] output of [docker network inspect no-preload-080337]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-080337 not found
	
	** /stderr **
	I1013 22:01:12.756982  455618 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:01:12.776576  455618 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:01:12.777144  455618 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:01:12.777667  455618 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:01:12.778234  455618 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c946d4d0529a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:85:25:23:b8:8e} reservation:<nil>}
	I1013 22:01:12.778540  455618 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-41a0a7263ae4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:f3:d9:f6:e7:45} reservation:<nil>}
	I1013 22:01:12.779167  455618 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00170a360}
	I1013 22:01:12.779190  455618 network_create.go:124] attempt to create docker network no-preload-080337 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1013 22:01:12.779237  455618 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-080337 no-preload-080337
	I1013 22:01:12.844688  455618 network_create.go:108] docker network no-preload-080337 192.168.94.0/24 created
	I1013 22:01:12.844727  455618 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-080337" container
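
The subnet walk above probes 192.168.49.0/24, 192.168.58.0/24, ... (the third octet advances by 9 in this run) and takes the first /24 not already claimed by an existing bridge; the node then gets the first client address, .2. A sketch of that probing pattern, assuming the step of 9 observed here and a caller-supplied set of taken subnets:

-- sketch (Go) --
package main

import "fmt"

// freeSubnet returns the first 192.168.x.0/24 candidate not already
// taken, stepping the third octet by 9 as in the log above.
func freeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// The five bridges the log reports as taken.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.94.0/24
	}
}
-- /sketch --
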
	I1013 22:01:12.844791  455618 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:01:12.864807  455618 cli_runner.go:164] Run: docker volume create no-preload-080337 --label name.minikube.sigs.k8s.io=no-preload-080337 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:01:12.879109  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:01:12.884229  455618 oci.go:103] Successfully created a docker volume no-preload-080337
	I1013 22:01:12.884336  455618 cli_runner.go:164] Run: docker run --rm --name no-preload-080337-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-080337 --entrypoint /usr/bin/test -v no-preload-080337:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:01:12.888475  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:01:12.906360  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:01:12.909970  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1013 22:01:12.910390  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:01:12.952605  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:01:12.976630  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1013 22:01:12.976656  455618 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 283.6651ms
	I1013 22:01:12.976678  455618 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
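
The cache paths in these lines follow a visible convention: images land under cache/images/<arch>/ with the registry path preserved and the tag separator ':' replaced by '_'. A sketch of that mapping, derived only from the paths in this log (the shortened root in main is illustrative):

-- sketch (Go) --
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath maps an image reference to its on-disk tar location, e.g.
// "registry.k8s.io/pause:3.10.1" ->
// ".../cache/images/amd64/registry.k8s.io/pause_3.10.1".
func cachePath(root, arch, image string) string {
	return filepath.Join(root, "cache", "images", arch,
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	fmt.Println(cachePath("/home/jenkins/.minikube", "amd64",
		"registry.k8s.io/etcd:3.6.4-0"))
}
-- /sketch --
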
	I1013 22:01:13.024664  455618 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:01:13.340986  455618 oci.go:107] Successfully prepared a docker volume no-preload-080337
	I1013 22:01:13.341036  455618 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1013 22:01:13.341152  455618 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:01:13.341194  455618 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:01:13.341243  455618 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:01:13.355705  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 22:01:13.355741  455618 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 662.775838ms
	I1013 22:01:13.355757  455618 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 22:01:13.406243  455618 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-080337 --name no-preload-080337 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-080337 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-080337 --network no-preload-080337 --ip 192.168.94.2 --volume no-preload-080337:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
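
The long docker run invocation above is easier to read flag by flag. A condensed reconstruction with annotations (a subset of the flags from the log line; the grouping and comments are mine):

-- sketch (Go) --
package main

import "os/exec"

func main() {
	// Condensed from the log line above (labels, extra publishes, and the
	// image digest omitted); comments annotate each flag's purpose.
	cmd := exec.Command("docker", "run",
		"-d", "-t", // detached, with a TTY
		"--privileged",                         // node manages its own container runtime
		"--security-opt", "seccomp=unconfined", // relaxed confinement for nested workloads
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run", // fresh tmpfs mounts on every boot
		"-v", "/lib/modules:/lib/modules:ro", // host kernel modules, read-only
		"--hostname", "no-preload-080337",
		"--name", "no-preload-080337",
		"--network", "no-preload-080337", // the bridge created above
		"--ip", "192.168.94.2", // static IP derived from the chosen subnet
		"--volume", "no-preload-080337:/var", // persistent /var volume
		"--memory", "3072mb",
		"--expose", "8443", // Kubernetes apiserver port
		"--publish", "127.0.0.1::8443", // host ports bound to loopback only
		"--publish", "127.0.0.1::22",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724",
	)
	_ = cmd.Start() // fire and forget; real code inspects the container state next
}
-- /sketch --
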
	I1013 22:01:13.721099  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Running}}
	I1013 22:01:13.742424  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:13.761761  455618 cli_runner.go:164] Run: docker exec no-preload-080337 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:01:13.810431  455618 oci.go:144] the created container "no-preload-080337" has a running status.
	I1013 22:01:13.810467  455618 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa...
	I1013 22:01:14.095536  455618 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:01:14.132504  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:14.155920  455618 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:01:14.155945  455618 kic_runner.go:114] Args: [docker exec --privileged no-preload-080337 chown docker:docker /home/docker/.ssh/authorized_keys]
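
"Creating ssh key for kic" above generates a keypair on the host, pushes the public half into the container's /home/docker/.ssh/authorized_keys (381 bytes here), then chowns it to docker:docker. A sketch of producing such an authorized_keys line; it uses the external golang.org/x/crypto/ssh module, and is illustrative rather than minikube's code:

-- sketch (Go) --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh" // external module: golang.org/x/crypto
)

func main() {
	// Generate the host-side keypair (id_rsa / id_rsa.pub in the log).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Emit the public half in authorized_keys format ("ssh-rsa AAAA...").
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(ssh.MarshalAuthorizedKey(pub))
}
-- /sketch --
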
	I1013 22:01:14.213318  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:14.236628  455618 machine.go:93] provisionDockerMachine start ...
	I1013 22:01:14.236728  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:14.262506  455618 main.go:141] libmachine: Using SSH client type: native
	I1013 22:01:14.262869  455618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1013 22:01:14.262890  455618 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:01:14.360164  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 22:01:14.360190  455618 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.667159063s
	I1013 22:01:14.360212  455618 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 22:01:14.421177  455618 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-080337
	
	I1013 22:01:14.421211  455618 ubuntu.go:182] provisioning hostname "no-preload-080337"
	I1013 22:01:14.421283  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:14.427247  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 22:01:14.427280  455618 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.73423639s
	I1013 22:01:14.427297  455618 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 22:01:14.445366  455618 main.go:141] libmachine: Using SSH client type: native
	I1013 22:01:14.445636  455618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1013 22:01:14.445656  455618 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-080337 && echo "no-preload-080337" | sudo tee /etc/hostname
	I1013 22:01:14.509299  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 22:01:14.509335  455618 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.816433778s
	I1013 22:01:14.509352  455618 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 22:01:14.554901  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 22:01:14.554939  455618 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.861842517s
	I1013 22:01:14.554955  455618 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 22:01:14.674070  455618 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-080337
	
	I1013 22:01:14.674163  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:14.694294  455618 main.go:141] libmachine: Using SSH client type: native
	I1013 22:01:14.694522  455618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1013 22:01:14.694540  455618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-080337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-080337/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-080337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:01:14.809401  455618 cache.go:157] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 22:01:14.809428  455618 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.116422465s
	I1013 22:01:14.809440  455618 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 22:01:14.809455  455618 cache.go:87] Successfully saved all images to host disk.
	I1013 22:01:14.836879  455618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:01:14.836916  455618 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:01:14.836952  455618 ubuntu.go:190] setting up certificates
	I1013 22:01:14.836963  455618 provision.go:84] configureAuth start
	I1013 22:01:14.837047  455618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:01:14.855788  455618 provision.go:143] copyHostCerts
	I1013 22:01:14.855854  455618 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:01:14.855863  455618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:01:14.855933  455618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:01:14.856051  455618 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:01:14.856061  455618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:01:14.856091  455618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:01:14.856154  455618 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:01:14.856161  455618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:01:14.856183  455618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:01:14.856239  455618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.no-preload-080337 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-080337]
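
configureAuth above issues a server certificate whose SANs cover every name and address the node answers on: 127.0.0.1, 192.168.94.2, localhost, minikube, and the profile name. A compact sketch of issuing such a cert (simplified: an ephemeral in-memory CA stands in for ca.pem/ca-key.pem, and error handling is collapsed into a must helper):

-- sketch (Go) --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	now := time.Now()

	// Ephemeral CA standing in for ca.pem / ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             now,
		NotAfter:              now.Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate with the SANs listed in the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-080337"}},
		NotBefore:    now,
		NotAfter:     now.Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "no-preload-080337"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
-- /sketch --
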
	I1013 22:01:15.263888  455618 provision.go:177] copyRemoteCerts
	I1013 22:01:15.263958  455618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:01:15.264015  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:15.283167  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:15.384762  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:01:15.406300  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:01:15.425691  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:01:15.443926  455618 provision.go:87] duration metric: took 606.94748ms to configureAuth
	I1013 22:01:15.443954  455618 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:01:15.444147  455618 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:01:15.444260  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:15.462194  455618 main.go:141] libmachine: Using SSH client type: native
	I1013 22:01:15.462418  455618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1013 22:01:15.462436  455618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:01:15.718702  455618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:01:15.718727  455618 machine.go:96] duration metric: took 1.482077202s to provisionDockerMachine
	I1013 22:01:15.718740  455618 client.go:171] duration metric: took 2.99894332s to LocalClient.Create
	I1013 22:01:15.718763  455618 start.go:167] duration metric: took 2.999001887s to libmachine.API.Create "no-preload-080337"
	I1013 22:01:15.718773  455618 start.go:293] postStartSetup for "no-preload-080337" (driver="docker")
	I1013 22:01:15.718786  455618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:01:15.718847  455618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:01:15.718936  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:15.737102  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:15.840392  455618 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:01:15.844268  455618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:01:15.844296  455618 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:01:15.844308  455618 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:01:15.844361  455618 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:01:15.844434  455618 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:01:15.844521  455618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:01:15.852821  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:01:15.874943  455618 start.go:296] duration metric: took 156.151225ms for postStartSetup
	I1013 22:01:15.875438  455618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:01:15.893727  455618 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/config.json ...
	I1013 22:01:15.894123  455618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:01:15.894187  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:15.912872  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:16.008793  455618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:01:16.014264  455618 start.go:128] duration metric: took 3.297837833s to createHost
	I1013 22:01:16.014294  455618 start.go:83] releasing machines lock for "no-preload-080337", held for 3.297977285s
	I1013 22:01:16.014376  455618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:01:16.034487  455618 ssh_runner.go:195] Run: cat /version.json
	I1013 22:01:16.034546  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:16.034550  455618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:01:16.034621  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:16.054410  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:16.055016  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:16.208964  455618 ssh_runner.go:195] Run: systemctl --version
	I1013 22:01:16.216376  455618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:01:16.251861  455618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:01:16.257008  455618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:01:16.257081  455618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:01:16.285321  455618 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
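
The two lines above disable the runtime's competing CNI configs: anything in /etc/cni/net.d matching bridge or podman is renamed with a .mk_disabled suffix so only minikube's chosen CNI loads. A sketch of the equivalent logic (the helper name is mine; the real flow shells out to find ... -exec mv, as shown above):

-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs so the container
// runtime ignores them, mirroring the find ... -exec mv {} {}.mk_disabled.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
-- /sketch --
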
	I1013 22:01:16.285347  455618 start.go:495] detecting cgroup driver to use...
	I1013 22:01:16.285382  455618 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:01:16.285431  455618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:01:16.302656  455618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:01:16.316247  455618 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:01:16.316300  455618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:01:16.333774  455618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:01:16.352386  455618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:01:16.435269  455618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:01:16.533312  455618 docker.go:234] disabling docker service ...
	I1013 22:01:16.533381  455618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:01:16.555692  455618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:01:16.569638  455618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:01:16.654646  455618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:01:16.743001  455618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:01:16.758429  455618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:01:16.774818  455618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:01:16.774880  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.787597  455618 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:01:16.787674  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.799709  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.809425  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.820402  455618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:01:16.829931  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.839718  455618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.856362  455618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:01:16.866658  455618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:01:16.875659  455618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:01:16.885289  455618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:01:16.980354  455618 ssh_runner.go:195] Run: sudo systemctl restart crio
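
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to "systemd", and re-add conmon_cgroup = "pod" right after it (the later steps handle the unprivileged-port sysctl the same way, before the daemon-reload and crio restart). A regexp equivalent of those first three edits; the sample input is an assumed, illustrative config fragment:

-- sketch (Go) --
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting fragment; real files differ.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as in the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch the cgroup driver to systemd.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
-- /sketch --
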
	I1013 22:01:17.139793  455618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:01:17.139850  455618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:01:17.144134  455618 start.go:563] Will wait 60s for crictl version
	I1013 22:01:17.144215  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.148203  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:01:17.178841  455618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:01:17.178954  455618 ssh_runner.go:195] Run: crio --version
	I1013 22:01:17.211125  455618 ssh_runner.go:195] Run: crio --version
	I1013 22:01:17.244531  455618 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:01:17.245939  455618 cli_runner.go:164] Run: docker network inspect no-preload-080337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:01:17.266260  455618 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1013 22:01:17.271178  455618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:01:17.282557  455618 kubeadm.go:883] updating cluster {Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:01:17.282664  455618 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:01:17.282713  455618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:01:17.312498  455618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 22:01:17.312528  455618 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1013 22:01:17.312611  455618 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.312631  455618 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:01:17.312629  455618 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.312611  455618 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:17.312660  455618 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.312652  455618 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.312634  455618 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.312676  455618 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.313920  455618 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.313930  455618 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.313921  455618 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.313948  455618 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.313921  455618 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.313978  455618 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.314053  455618 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:01:17.314326  455618 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:17.458766  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.472545  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.484397  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.485885  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1013 22:01:17.495637  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.500032  455618 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1013 22:01:17.500092  455618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.500191  455618 ssh_runner.go:195] Run: which crictl
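
LoadCachedImages above runs a compare-then-replace pass per image: inspect the ID the runtime holds, and when it is missing or differs from the hash the cache expects, remove the stale copy so the cached tar can be loaded in its place. A sketch of that decision, reusing the podman/crictl commands visible in the log (the hash below is the pause image's expected ID from a later log line):

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the runtime or
// present under a different ID than expected (compare the
// `needs transfer: ... does not exist at hash` lines in this log).
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.10.1"
	want := "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	if needsTransfer(img, want) {
		// Stale or missing: remove before loading the cached tar.
		fmt.Println("rmi:", exec.Command("sudo", "crictl", "rmi", img).Run())
	}
}
-- /sketch --
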
	I1013 22:01:16.461805  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:16.961617  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:17.462074  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:17.962405  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:18.462053  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:18.961857  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:19.462235  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:19.961645  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:20.462273  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:20.961418  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
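
The ten lines above are one poll loop: the same kubectl get sa default runs every ~500ms until the default service account exists, a common readiness signal after a control-plane restart. The shape of that loop, with an assumed two-minute deadline:

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for default service account")
}
-- /sketch --
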
	I1013 22:01:16.728393  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:16.728851  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:16.728909  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:16.728979  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:16.760273  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:16.760296  410447 cri.go:89] found id: ""
	I1013 22:01:16.760306  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:16.760367  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:16.764775  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:16.764858  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:16.794937  410447 cri.go:89] found id: ""
	I1013 22:01:16.794966  410447 logs.go:282] 0 containers: []
	W1013 22:01:16.794977  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:16.794984  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:16.795057  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:16.824913  410447 cri.go:89] found id: ""
	I1013 22:01:16.824944  410447 logs.go:282] 0 containers: []
	W1013 22:01:16.824956  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:16.824965  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:16.825045  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:16.855159  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:16.855185  410447 cri.go:89] found id: ""
	I1013 22:01:16.855196  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:16.855247  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:16.859596  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:16.859671  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:16.888498  410447 cri.go:89] found id: ""
	I1013 22:01:16.888529  410447 logs.go:282] 0 containers: []
	W1013 22:01:16.888541  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:16.888556  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:16.888626  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:16.922093  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:16.922129  410447 cri.go:89] found id: "f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:16.922135  410447 cri.go:89] found id: ""
	I1013 22:01:16.922146  410447 logs.go:282] 2 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878]
	I1013 22:01:16.922212  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:16.928913  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:16.933567  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:16.933651  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:16.964378  410447 cri.go:89] found id: ""
	I1013 22:01:16.964409  410447 logs.go:282] 0 containers: []
	W1013 22:01:16.964420  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:16.964427  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:16.964492  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:16.997816  410447 cri.go:89] found id: ""
	I1013 22:01:16.997851  410447 logs.go:282] 0 containers: []
	W1013 22:01:16.997863  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:16.997879  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:16.997895  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:17.015483  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:17.015520  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:17.079467  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:17.079494  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:17.079509  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:17.135008  410447 logs.go:123] Gathering logs for kube-controller-manager [f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878] ...
	I1013 22:01:17.135050  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:17.171128  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:17.171161  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:17.234894  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:17.234931  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:17.272180  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:17.272230  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:17.362966  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:17.363012  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:17.399548  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:17.399578  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:19.929052  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:19.929583  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:19.929646  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:19.929725  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:19.962010  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:19.962034  410447 cri.go:89] found id: ""
	I1013 22:01:19.962046  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:19.962102  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:19.967310  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:19.967392  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:20.005821  410447 cri.go:89] found id: ""
	I1013 22:01:20.005851  410447 logs.go:282] 0 containers: []
	W1013 22:01:20.005874  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:20.005883  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:20.006003  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:20.039277  410447 cri.go:89] found id: ""
	I1013 22:01:20.039309  410447 logs.go:282] 0 containers: []
	W1013 22:01:20.039321  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:20.039330  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:20.039391  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:20.077084  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:20.077114  410447 cri.go:89] found id: ""
	I1013 22:01:20.077125  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:20.077201  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:20.082036  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:20.082112  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:20.118470  410447 cri.go:89] found id: ""
	I1013 22:01:20.118498  410447 logs.go:282] 0 containers: []
	W1013 22:01:20.118506  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:20.118512  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:20.118557  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:20.150299  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:20.150324  410447 cri.go:89] found id: "f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:20.150329  410447 cri.go:89] found id: ""
	I1013 22:01:20.150339  410447 logs.go:282] 2 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878]
	I1013 22:01:20.150449  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:20.155055  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:20.159465  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:20.159539  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:20.191386  410447 cri.go:89] found id: ""
	I1013 22:01:20.191417  410447 logs.go:282] 0 containers: []
	W1013 22:01:20.191428  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:20.191434  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:20.191497  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:20.223467  410447 cri.go:89] found id: ""
	I1013 22:01:20.223497  410447 logs.go:282] 0 containers: []
	W1013 22:01:20.223509  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:20.223526  410447 logs.go:123] Gathering logs for kube-controller-manager [f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878] ...
	I1013 22:01:20.223541  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6f55e13dca4b11540a90d88bc6a234cd4492de8bf23fd086660a9f2109d5878"
	I1013 22:01:20.253594  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:20.253627  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:20.314546  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:20.314589  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:20.351241  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:20.351280  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:20.367288  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:20.367319  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:20.442430  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:20.442456  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:20.442471  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:20.475232  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:20.475272  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:20.600086  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:20.600130  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:20.639504  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:20.639541  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
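
Each retry cycle in this block opens with the same probe: GET https://192.168.76.2:8443/healthz, which returns "connection refused" while the apiserver is down, after which the tool falls back to enumerating containers and gathering their logs. A minimal version of that probe (certificate verification skipped, as a bare healthz poke typically is; illustrative only):

-- sketch (Go) --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification: this is a liveness poke, not an authenticated call.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
-- /sketch --
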
	I1013 22:01:17.514890  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.514893  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.519960  455618 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1013 22:01:17.520017  455618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.520061  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.535323  455618 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1013 22:01:17.535384  455618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.535439  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.535543  455618 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1013 22:01:17.535575  455618 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1013 22:01:17.535603  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.544291  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.544372  455618 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1013 22:01:17.544421  455618 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.544466  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.562325  455618 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1013 22:01:17.562377  455618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.562396  455618 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1013 22:01:17.562428  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.562434  455618 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.562465  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.562475  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:17.562501  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.562532  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:01:17.577422  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.577539  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.577565  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.577614  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.599778  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.599955  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:01:17.602347  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.618436  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:01:17.618577  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.618669  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.618765  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.643010  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:01:17.643142  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:01:17.643142  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:01:17.657630  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:01:17.657747  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:01:17.657794  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:01:17.657749  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:01:17.657919  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:01:17.680823  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:01:17.681008  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:01:17.683216  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1013 22:01:17.683222  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:01:17.683317  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:01:17.683317  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1013 22:01:17.694427  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1013 22:01:17.694453  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:01:17.694459  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:01:17.694467  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1013 22:01:17.694521  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:01:17.694539  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:01:17.694573  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1013 22:01:17.694540  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:01:17.694604  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1013 22:01:17.694616  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1013 22:01:17.694585  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:01:17.694631  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1013 22:01:17.694658  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1013 22:01:17.694671  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1013 22:01:17.703762  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1013 22:01:17.703806  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1013 22:01:17.704173  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1013 22:01:17.704204  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1013 22:01:17.713620  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1013 22:01:17.713657  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
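
Note: every "existence check ... Process exited with status 1" pair above is one idiom: stat the target path on the node, and only scp the cached file across when stat fails with "No such file or directory". A sketch under the same runSSH assumption, with copyToNode standing in (hypothetically) for ssh_runner's scp transfer:

    package sketch

    import (
        "fmt"
        "os"
    )

    // ensureFileOnNode applies the stat-then-scp pattern from the log.
    func ensureFileOnNode(runSSH func(string) (string, error),
        copyToNode func(local, remote string, size int64) error,
        local, remote string) error {
        // Existence check: stat prints "<size> <mtime>" and exits 0 if present.
        if _, err := runSSH(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err == nil {
            return nil // already on the node, skip the transfer
        }
        fi, err := os.Stat(local)
        if err != nil {
            return err
        }
        // Missing remotely ("No such file or directory"): stream the cached
        // tarball across, logging its byte size like the scp lines above.
        return copyToNode(local, remote, fi.Size())
    }

Doing the stat first keeps the transfer idempotent across retries: a file that already landed is skipped rather than re-copied.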
	I1013 22:01:17.766486  455618 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1013 22:01:17.766556  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1013 22:01:17.770646  455618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:18.254496  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1013 22:01:18.254547  455618 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:01:18.254568  455618 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1013 22:01:18.254611  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:01:18.254615  455618 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:18.254703  455618 ssh_runner.go:195] Run: which crictl
	I1013 22:01:18.260338  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:19.396954  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.142302082s)
	I1013 22:01:19.397012  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1013 22:01:19.397009  455618 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.136623994s)
	I1013 22:01:19.397045  455618 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:01:19.397081  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:01:19.397083  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:19.424661  455618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:20.665792  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.268674668s)
	I1013 22:01:20.665830  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1013 22:01:20.665856  455618 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:01:20.665867  455618 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.241172113s)
	I1013 22:01:20.665905  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:01:20.665922  455618 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1013 22:01:20.666062  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:01:21.953458  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.287525458s)
	I1013 22:01:21.953484  455618 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.28740329s)
	I1013 22:01:21.953494  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1013 22:01:21.953510  455618 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1013 22:01:21.953534  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1013 22:01:21.953532  455618 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:01:21.953655  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
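
Note: the loads themselves are serialized: crio.go announces "Loading image: <tarball>" and runs "sudo podman load -i <tarball>" one at a time, and any command slower than about a second gets a "Completed: ... (<duration>)" line. A Go sketch of that loop, runSSH hypothetical as before:

    package sketch

    import (
        "fmt"
        "time"
    )

    // loadImages loads cached tarballs into the CRI-O image store one by one.
    func loadImages(runSSH func(string) (string, error), tarballs []string) error {
        for _, tb := range tarballs {
            start := time.Now()
            if _, err := runSSH("sudo podman load -i " + tb); err != nil {
                return fmt.Errorf("load %s: %w", tb, err)
            }
            // Slow commands are reported with a duration metric, as in the log.
            if d := time.Since(start); d > time.Second {
                fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", tb, d)
            }
        }
        return nil
    }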
	I1013 22:01:21.461634  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:21.961791  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:22.462120  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:22.961431  451152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:23.043445  451152 kubeadm.go:1113] duration metric: took 12.155054629s to wait for elevateKubeSystemPrivileges
	I1013 22:01:23.043486  451152 kubeadm.go:402] duration metric: took 22.192914415s to StartCluster
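
Note: the four "kubectl get sa default" runs above, spaced almost exactly 500ms apart, are the elevateKubeSystemPrivileges wait: poll until the "default" ServiceAccount exists, then record the total as a duration metric (12.155s here). A sketch of that poll, with the command taken verbatim from the log and the interval read off the timestamps:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForDefaultSA polls until kube-controller-manager has created the
    // "default" ServiceAccount in the current namespace.
    func waitForDefaultSA(runSSH func(string) (string, error), timeout time.Duration) error {
        const cmd = "sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := runSSH(cmd); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // the ~0.5s cadence visible above
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }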
	I1013 22:01:23.043510  451152 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:23.043587  451152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:01:23.045331  451152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:23.045595  451152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:01:23.045607  451152 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:01:23.045668  451152 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:01:23.045775  451152 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-534822"
	I1013 22:01:23.045795  451152 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-534822"
	I1013 22:01:23.045831  451152 host.go:66] Checking if "old-k8s-version-534822" exists ...
	I1013 22:01:23.045887  451152 config.go:182] Loaded profile config "old-k8s-version-534822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:01:23.045941  451152 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-534822"
	I1013 22:01:23.045958  451152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-534822"
	I1013 22:01:23.046309  451152 cli_runner.go:164] Run: docker container inspect old-k8s-version-534822 --format={{.State.Status}}
	I1013 22:01:23.046451  451152 cli_runner.go:164] Run: docker container inspect old-k8s-version-534822 --format={{.State.Status}}
	I1013 22:01:23.049150  451152 out.go:179] * Verifying Kubernetes components...
	I1013 22:01:23.050812  451152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:01:23.072816  451152 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:23.073038  451152 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-534822"
	I1013 22:01:23.073088  451152 host.go:66] Checking if "old-k8s-version-534822" exists ...
	I1013 22:01:23.073563  451152 cli_runner.go:164] Run: docker container inspect old-k8s-version-534822 --format={{.State.Status}}
	I1013 22:01:23.074157  451152 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:01:23.074178  451152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:01:23.074230  451152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-534822
	I1013 22:01:23.103666  451152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/old-k8s-version-534822/id_rsa Username:docker}
	I1013 22:01:23.105281  451152 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:01:23.105311  451152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:01:23.105371  451152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-534822
	I1013 22:01:23.130392  451152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/old-k8s-version-534822/id_rsa Username:docker}
	I1013 22:01:23.160845  451152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:01:23.207899  451152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:01:23.228045  451152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:01:23.252476  451152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:01:23.566842  451152 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1013 22:01:23.568317  451152 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-534822" to be "Ready" ...
	I1013 22:01:23.882305  451152 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:01:23.883436  451152 addons.go:514] duration metric: took 837.768062ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:01:24.071978  451152 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-534822" context rescaled to 1 replicas
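
Note: the host record reported at 22:01:23.566 is injected by fetching the coredns ConfigMap, splicing a hosts{} block in front of the "forward . /etc/resolv.conf" directive with sed, and piping the result back through "kubectl replace -f -" (the long pipeline at 22:01:23.160). A pure-Go equivalent of just the splice, shown only to make the sed legible:

    package sketch

    import "strings"

    // injectHostRecord inserts a hosts{} block immediately before the
    // "forward . /etc/resolv.conf" line of a Corefile.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := "        hosts {\n" +
            "           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock) // splice just before the forward plugin
            }
            out.WriteString(line)
        }
        return out.String()
    }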
	W1013 22:01:25.572416  451152 node_ready.go:57] node "old-k8s-version-534822" has "Ready":"False" status (will retry)
	I1013 22:01:23.198088  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:23.198579  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
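
Note: the healthz probe above is a plain HTTPS GET against <node-ip>:8443/healthz; a connection-refused error is logged as "stopped" and the collector falls back to gathering component logs before retrying. A minimal sketch, with TLS verification skipped as an assumption (the probe runs before a configured, CA-trusting client is available):

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkAPIServerHealthz probes the apiserver's /healthz endpoint.
    func checkAPIServerHealthz(endpoint string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            // e.g. "dial tcp 192.168.76.2:8443: connect: connection refused"
            return fmt.Errorf("stopped: %s/healthz: %w", endpoint, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }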
	I1013 22:01:23.198642  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:23.198699  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:23.239024  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:23.239046  410447 cri.go:89] found id: ""
	I1013 22:01:23.239056  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:23.239128  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:23.244659  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:23.244770  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:23.285459  410447 cri.go:89] found id: ""
	I1013 22:01:23.285492  410447 logs.go:282] 0 containers: []
	W1013 22:01:23.285505  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:23.285514  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:23.285584  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:23.325722  410447 cri.go:89] found id: ""
	I1013 22:01:23.325750  410447 logs.go:282] 0 containers: []
	W1013 22:01:23.325758  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:23.325765  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:23.325812  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:23.372374  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:23.372399  410447 cri.go:89] found id: ""
	I1013 22:01:23.372409  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:23.372466  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:23.378924  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:23.379025  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:23.423337  410447 cri.go:89] found id: ""
	I1013 22:01:23.423369  410447 logs.go:282] 0 containers: []
	W1013 22:01:23.423380  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:23.423388  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:23.423449  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:23.467667  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:23.467692  410447 cri.go:89] found id: ""
	I1013 22:01:23.467703  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:23.467778  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:23.473484  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:23.473562  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:23.506689  410447 cri.go:89] found id: ""
	I1013 22:01:23.506723  410447 logs.go:282] 0 containers: []
	W1013 22:01:23.506741  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:23.506750  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:23.506816  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:23.540472  410447 cri.go:89] found id: ""
	I1013 22:01:23.540498  410447 logs.go:282] 0 containers: []
	W1013 22:01:23.540506  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:23.540516  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:23.540526  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:23.617484  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:23.617533  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:23.663114  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:23.663156  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:23.757202  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:23.757250  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:23.772664  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:23.772693  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:23.847868  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:23.847895  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:23.847921  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:23.888858  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:23.888888  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:23.951505  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:23.951545  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:26.488067  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:26.488514  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:26.488568  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:26.488626  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:26.519383  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:26.519408  410447 cri.go:89] found id: ""
	I1013 22:01:26.519420  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:26.519475  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:26.523797  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:26.523864  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:23.861420  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.907735068s)
	I1013 22:01:23.861528  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1013 22:01:23.861587  455618 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:01:23.861653  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:01:25.217183  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.355497462s)
	I1013 22:01:25.217222  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1013 22:01:25.217261  455618 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:01:25.217352  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1013 22:01:28.072198  451152 node_ready.go:57] node "old-k8s-version-534822" has "Ready":"False" status (will retry)
	W1013 22:01:30.072434  451152 node_ready.go:57] node "old-k8s-version-534822" has "Ready":"False" status (will retry)
	I1013 22:01:26.556588  410447 cri.go:89] found id: ""
	I1013 22:01:26.556617  410447 logs.go:282] 0 containers: []
	W1013 22:01:26.556628  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:26.556637  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:26.556698  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:26.590610  410447 cri.go:89] found id: ""
	I1013 22:01:26.590641  410447 logs.go:282] 0 containers: []
	W1013 22:01:26.590653  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:26.590661  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:26.590720  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:26.625320  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:26.625345  410447 cri.go:89] found id: ""
	I1013 22:01:26.625355  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:26.625415  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:26.630421  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:26.630523  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:26.667219  410447 cri.go:89] found id: ""
	I1013 22:01:26.667247  410447 logs.go:282] 0 containers: []
	W1013 22:01:26.667259  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:26.667266  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:26.667328  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:26.708951  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:26.708980  410447 cri.go:89] found id: ""
	I1013 22:01:26.709003  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:26.709104  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:26.714673  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:26.714755  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:26.749522  410447 cri.go:89] found id: ""
	I1013 22:01:26.749553  410447 logs.go:282] 0 containers: []
	W1013 22:01:26.749565  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:26.749574  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:26.749635  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:26.784920  410447 cri.go:89] found id: ""
	I1013 22:01:26.784950  410447 logs.go:282] 0 containers: []
	W1013 22:01:26.784961  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:26.784979  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:26.785011  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:26.851303  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:26.851341  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:26.885174  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:26.885204  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:26.954511  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:26.954555  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:26.990548  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:26.990579  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:27.102608  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:27.102651  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:27.120811  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:27.120848  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:27.184304  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:27.184333  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:27.184352  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:29.721061  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:29.721447  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:29.721515  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:29.721579  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:29.752139  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:29.752161  410447 cri.go:89] found id: ""
	I1013 22:01:29.752172  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:29.752230  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:29.757517  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:29.757585  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:29.789969  410447 cri.go:89] found id: ""
	I1013 22:01:29.790009  410447 logs.go:282] 0 containers: []
	W1013 22:01:29.790020  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:29.790029  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:29.790089  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:29.830316  410447 cri.go:89] found id: ""
	I1013 22:01:29.830344  410447 logs.go:282] 0 containers: []
	W1013 22:01:29.830356  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:29.830364  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:29.830439  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:29.870303  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:29.870326  410447 cri.go:89] found id: ""
	I1013 22:01:29.870337  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:29.870394  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:29.876168  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:29.876243  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:29.909698  410447 cri.go:89] found id: ""
	I1013 22:01:29.909725  410447 logs.go:282] 0 containers: []
	W1013 22:01:29.909733  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:29.909739  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:29.909799  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:29.939756  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:29.939783  410447 cri.go:89] found id: ""
	I1013 22:01:29.939795  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:29.939856  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:29.944491  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:29.944561  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:29.973256  410447 cri.go:89] found id: ""
	I1013 22:01:29.973283  410447 logs.go:282] 0 containers: []
	W1013 22:01:29.973291  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:29.973296  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:29.973361  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:30.004520  410447 cri.go:89] found id: ""
	I1013 22:01:30.004546  410447 logs.go:282] 0 containers: []
	W1013 22:01:30.004556  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:30.004566  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:30.004580  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:30.103206  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:30.103251  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:30.119852  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:30.119886  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:30.190721  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:30.190743  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:30.190756  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:30.228370  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:30.228401  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:30.289631  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:30.289679  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:30.320739  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:30.320772  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:30.381587  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:30.381620  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:29.144202  455618 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.926821687s)
	I1013 22:01:29.144229  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1013 22:01:29.144250  455618 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:01:29.144288  455618 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:01:29.678684  455618 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1013 22:01:29.678731  455618 cache_images.go:124] Successfully loaded all cached images
	I1013 22:01:29.678739  455618 cache_images.go:93] duration metric: took 12.366197275s to LoadCachedImages
	I1013 22:01:29.678755  455618 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:01:29.678866  455618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-080337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
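
Note: the doubled ExecStart in the unit above is deliberate: in a systemd drop-in, an empty "ExecStart=" clears the base unit's command list so the second "ExecStart=..." fully replaces it rather than appending a second invocation. A sketch of rendering that drop-in (this function and its exact output are illustrative, not minikube's code):

    package sketch

    import "strings"

    // kubeletDropIn renders a [Service] override in the style of the unit above.
    func kubeletDropIn(binDir, nodeName, nodeIP string) string {
        flags := strings.Join([]string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + nodeName,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }, " ")
        // Empty ExecStart= first: systemd's idiom for "clear, then set".
        return "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" +
            binDir + "/kubelet " + flags + "\n"
    }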
	I1013 22:01:29.678948  455618 ssh_runner.go:195] Run: crio config
	I1013 22:01:29.727219  455618 cni.go:84] Creating CNI manager for ""
	I1013 22:01:29.727243  455618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:01:29.727262  455618 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:01:29.727291  455618 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-080337 NodeName:no-preload-080337 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:01:29.727422  455618 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-080337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
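Note: the generated file is four YAML documents in one: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, later written to /var/tmp/minikube/kubeadm.yaml.new (2213 bytes in this run). minikube renders it from Go templates; the template below is a cut-down illustration of the first document only, not the real one:

    package sketch

    import (
        "bytes"
        "text/template"
    )

    // initCfgTmpl is an illustrative template for the InitConfiguration document.
    var initCfgTmpl = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `))

    type initCfgParams struct {
        NodeIP        string
        APIServerPort int
        CRISocket     string
        NodeName      string
    }

    // renderInitConfig fills the template from the kubeadm options struct fields.
    func renderInitConfig(p initCfgParams) (string, error) {
        var buf bytes.Buffer
        if err := initCfgTmpl.Execute(&buf, p); err != nil {
            return "", err
        }
        return buf.String(), nil
    }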
	I1013 22:01:29.727484  455618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:01:29.736306  455618 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1013 22:01:29.736357  455618 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1013 22:01:29.745104  455618 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1013 22:01:29.745189  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1013 22:01:29.745196  455618 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1013 22:01:29.745231  455618 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1013 22:01:29.750245  455618 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1013 22:01:29.750281  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1013 22:01:30.974091  455618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:01:30.987900  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1013 22:01:30.991892  455618 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1013 22:01:30.991929  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1013 22:01:31.084822  455618 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1013 22:01:31.090309  455618 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1013 22:01:31.090348  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
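
Note: binaries.go treats the failed "sudo ls /var/lib/minikube/binaries/v1.34.1" as a cache miss: it downloads kubectl, kubelet, and kubeadm from dl.k8s.io, with the "?checksum=file:<url>.sha256" hint telling minikube's downloader to fetch and verify the sha256 sidecar, then scps each binary onto the node once the stat existence check fails. A standalone sketch of that sidecar verification:

    package sketch

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verifySHA256 checks a downloaded binary against its .sha256 sidecar value.
    func verifySHA256(binPath, wantHex string) error {
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != strings.TrimSpace(wantHex) {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binPath, got, wantHex)
        }
        return nil
    }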
	I1013 22:01:31.298020  455618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:01:31.306111  455618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:01:31.319130  455618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:01:31.334917  455618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1013 22:01:31.348412  455618 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:01:31.352351  455618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:01:31.362903  455618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:01:31.450514  455618 ssh_runner.go:195] Run: sudo systemctl start kubelet
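
Note: the /etc/hosts rewrite at 22:01:31.352 is idempotent: grep -v drops any existing tab-anchored "control-plane.minikube.internal" line, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. The same logic in Go, for legibility:

    package sketch

    import "strings"

    // upsertHostsEntry drops any stale tab-anchored entry for name,
    // then appends a fresh "ip<TAB>name" line, as the shell pipeline does.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // grep -v $'\t<name>$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }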
	I1013 22:01:31.482493  455618 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337 for IP: 192.168.94.2
	I1013 22:01:31.482518  455618 certs.go:195] generating shared ca certs ...
	I1013 22:01:31.482540  455618 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:31.482751  455618 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:01:31.482821  455618 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:01:31.482836  455618 certs.go:257] generating profile certs ...
	I1013 22:01:31.482914  455618 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.key
	I1013 22:01:31.482933  455618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt with IP's: []
	I1013 22:01:32.172595  455618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt ...
	I1013 22:01:32.172634  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: {Name:mk2d69d87872e68968faf82160090e2d39dc8853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:32.172816  455618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.key ...
	I1013 22:01:32.172828  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.key: {Name:mk630e0f9107404f674d84f6616025bc9a2251c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:32.172916  455618 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key.7644baed
	I1013 22:01:32.172933  455618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt.7644baed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1013 22:01:32.452908  455618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt.7644baed ...
	I1013 22:01:32.452938  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt.7644baed: {Name:mkf65c91d7749f861f020e2234c1e1c37acc71ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:32.453115  455618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key.7644baed ...
	I1013 22:01:32.453132  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key.7644baed: {Name:mk801093f8d8662dfa4f11cfbdfd3a61bac94d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:32.453210  455618 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt.7644baed -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt
	I1013 22:01:32.453285  455618 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key.7644baed -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key
	I1013 22:01:32.453340  455618 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key
	I1013 22:01:32.453351  455618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.crt with IP's: []
	I1013 22:01:32.839880  455618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.crt ...
	I1013 22:01:32.839909  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.crt: {Name:mkeb5c5bab82557c1b9b24516c23a0237718ba16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:32.840111  455618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key ...
	I1013 22:01:32.840128  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key: {Name:mkfa0c91d2c774b16a058e38acd11c5933443094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
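
The apiserver profile cert above is signed by the shared CA and carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]. A self-contained Go sketch of issuing a CA-signed cert with IP SANs via crypto/x509 (an illustration, not minikube's crypto.go; errors elided for brevity):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Hypothetical CA; minikube loads its own from .minikube/ca.{crt,key}.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Leaf cert with the IP SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
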
	I1013 22:01:32.840303  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:01:32.840343  455618 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:01:32.840354  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:01:32.840382  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:01:32.840406  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:01:32.840427  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:01:32.840463  455618 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:01:32.841052  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:01:32.860690  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:01:32.879247  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:01:32.897654  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:01:32.916086  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:01:32.936294  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:01:32.960331  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:01:32.982300  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:01:33.003648  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:01:33.034287  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:01:33.055237  455618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:01:33.075028  455618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:01:33.090469  455618 ssh_runner.go:195] Run: openssl version
	I1013 22:01:33.097257  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:01:33.106877  455618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:01:33.111857  455618 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:01:33.111925  455618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:01:33.157813  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:01:33.168570  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:01:33.179385  455618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:01:33.184059  455618 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:01:33.184125  455618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:01:33.222779  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:01:33.233596  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:01:33.242663  455618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:01:33.247352  455618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:01:33.247412  455618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:01:33.283396  455618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
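
Each CA dropped into /usr/share/ca-certificates is then linked under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is what `openssl x509 -hash -noout` prints. A sketch of that step, assuming openssl is on PATH (helper name hypothetical):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors the pattern in the log: hash the cert with
	// `openssl x509 -hash -noout -in <pem>` and symlink it as <hash>.0 so
	// OpenSSL's directory lookup can find it.
	func linkBySubjectHash(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		os.Remove(link) // ignore error; mimic `ln -fs`
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
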
	I1013 22:01:33.293402  455618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:01:33.298305  455618 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:01:33.298365  455618 kubeadm.go:400] StartCluster: {Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:01:33.298457  455618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:01:33.298515  455618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:01:33.330588  455618 cri.go:89] found id: ""
	I1013 22:01:33.330659  455618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:01:33.339501  455618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:01:33.348167  455618 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:01:33.348228  455618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:01:33.356846  455618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:01:33.356864  455618 kubeadm.go:157] found existing configuration files:
	
	I1013 22:01:33.356907  455618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:01:33.366072  455618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:01:33.366124  455618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:01:33.374245  455618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:01:33.383411  455618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:01:33.383481  455618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:01:33.392065  455618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:01:33.400961  455618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:01:33.401045  455618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:01:33.409366  455618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:01:33.418621  455618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:01:33.418673  455618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
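
The four grep/rm pairs above all apply one rule: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. A sketch of that rule (sudo/ssh plumbing elided):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// cleanStaleConf mirrors the log's pattern: if the file doesn't contain
	// the expected server URL (grep exits non-zero), remove it so
	// `kubeadm init` writes a fresh one.
	func cleanStaleConf(path, wantURL string) error {
		if err := exec.Command("sudo", "grep", wantURL, path).Run(); err != nil {
			return exec.Command("sudo", "rm", "-f", path).Run()
		}
		return nil // file already targets the right endpoint; keep it
	}
	
	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := cleanStaleConf("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
				fmt.Println(f, err)
			}
		}
	}
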
	I1013 22:01:33.426541  455618 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:01:33.465321  455618 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:01:33.465390  455618 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:01:33.488315  455618 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:01:33.488432  455618 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:01:33.488488  455618 kubeadm.go:318] OS: Linux
	I1013 22:01:33.488558  455618 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:01:33.488673  455618 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:01:33.488764  455618 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:01:33.488860  455618 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:01:33.488977  455618 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:01:33.489090  455618 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:01:33.489176  455618 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:01:33.489248  455618 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:01:33.552965  455618 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:01:33.553130  455618 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:01:33.553284  455618 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:01:33.568303  455618 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:01:32.571147  451152 node_ready.go:57] node "old-k8s-version-534822" has "Ready":"False" status (will retry)
	W1013 22:01:34.572333  451152 node_ready.go:57] node "old-k8s-version-534822" has "Ready":"False" status (will retry)
	I1013 22:01:32.918061  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:32.918531  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:32.918593  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:32.918651  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:32.951105  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:32.951132  410447 cri.go:89] found id: ""
	I1013 22:01:32.951142  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:32.951208  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:32.956317  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:32.956388  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:32.991438  410447 cri.go:89] found id: ""
	I1013 22:01:32.991466  410447 logs.go:282] 0 containers: []
	W1013 22:01:32.991477  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:32.991485  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:32.991559  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:33.022193  410447 cri.go:89] found id: ""
	I1013 22:01:33.022226  410447 logs.go:282] 0 containers: []
	W1013 22:01:33.022237  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:33.022244  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:33.022309  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:33.052158  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:33.052182  410447 cri.go:89] found id: ""
	I1013 22:01:33.052192  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:33.052274  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:33.056701  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:33.056774  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:33.086259  410447 cri.go:89] found id: ""
	I1013 22:01:33.086290  410447 logs.go:282] 0 containers: []
	W1013 22:01:33.086301  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:33.086308  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:33.086367  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:33.115758  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:33.115782  410447 cri.go:89] found id: ""
	I1013 22:01:33.115793  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:33.115854  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:33.119622  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:33.119695  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:33.151538  410447 cri.go:89] found id: ""
	I1013 22:01:33.151565  410447 logs.go:282] 0 containers: []
	W1013 22:01:33.151577  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:33.151585  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:33.151647  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:33.181830  410447 cri.go:89] found id: ""
	I1013 22:01:33.181862  410447 logs.go:282] 0 containers: []
	W1013 22:01:33.181874  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:33.181886  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:33.181899  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:33.210080  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:33.210110  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:33.267491  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:33.267526  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:33.301048  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:33.301077  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:33.394041  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:33.394072  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:33.409799  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:33.409828  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:33.471640  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:33.471661  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:33.471673  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:33.507424  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:33.507461  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
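
Each gathering pass above follows the same shape: enumerate the expected control-plane containers with `crictl ps -a --quiet --name=...`, then tail the last 400 log lines of each one found. A simplified Go sketch of that loop (orchestration and ssh transport elided):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs lists CRI containers by name filter, as in
	// `sudo crictl ps -a --quiet --name=<name>` above.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}
	
	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids := containerIDs(name)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, matching the crictl invocation in the log.
				out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", name, id, out)
			}
		}
	}
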
	I1013 22:01:36.073429  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:36.073802  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:36.073850  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:36.073899  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:36.103491  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:36.103511  410447 cri.go:89] found id: ""
	I1013 22:01:36.103520  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:36.103589  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:36.107785  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:36.107844  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:36.139818  410447 cri.go:89] found id: ""
	I1013 22:01:36.139847  410447 logs.go:282] 0 containers: []
	W1013 22:01:36.139858  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:36.139866  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:36.139927  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:36.175087  410447 cri.go:89] found id: ""
	I1013 22:01:36.175113  410447 logs.go:282] 0 containers: []
	W1013 22:01:36.175121  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:36.175127  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:36.175175  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:36.204303  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:36.204329  410447 cri.go:89] found id: ""
	I1013 22:01:36.204341  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:36.204406  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:36.208986  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:36.209078  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:36.238246  410447 cri.go:89] found id: ""
	I1013 22:01:36.238273  410447 logs.go:282] 0 containers: []
	W1013 22:01:36.238281  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:36.238286  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:36.238347  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:36.268846  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:36.268866  410447 cri.go:89] found id: ""
	I1013 22:01:36.268874  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:36.268920  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:36.273078  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:36.273142  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:36.302151  410447 cri.go:89] found id: ""
	I1013 22:01:36.302176  410447 logs.go:282] 0 containers: []
	W1013 22:01:36.302184  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:36.302189  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:36.302239  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:36.331977  410447 cri.go:89] found id: ""
	I1013 22:01:36.332021  410447 logs.go:282] 0 containers: []
	W1013 22:01:36.332033  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:36.332045  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:36.332060  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:36.364960  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:36.365043  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:36.471577  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:36.471619  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:36.490029  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:36.490060  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1013 22:01:33.571296  455618 out.go:252]   - Generating certificates and keys ...
	I1013 22:01:33.571413  455618 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:01:33.571516  455618 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:01:33.653327  455618 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:01:33.928781  455618 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:01:34.076680  455618 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:01:34.267382  455618 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:01:34.559093  455618 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:01:34.559267  455618 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-080337] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:01:34.668015  455618 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:01:34.668155  455618 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-080337] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:01:35.395462  455618 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:01:36.030240  455618 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:01:36.395610  455618 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:01:36.395726  455618 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:01:36.758338  455618 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:01:36.974160  455618 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:01:37.065423  455618 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:01:37.325692  455618 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:01:37.430235  455618 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:01:37.430830  455618 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:01:37.435182  455618 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:01:37.437650  455618 out.go:252]   - Booting up control plane ...
	I1013 22:01:37.437788  455618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:01:37.437882  455618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:01:37.437982  455618 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:01:37.456170  455618 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:01:37.456269  455618 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:01:37.463078  455618 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:01:37.463310  455618 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:01:37.463361  455618 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:01:36.571575  451152 node_ready.go:49] node "old-k8s-version-534822" is "Ready"
	I1013 22:01:36.571612  451152 node_ready.go:38] duration metric: took 13.003263148s for node "old-k8s-version-534822" to be "Ready" ...
	I1013 22:01:36.571631  451152 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:01:36.571685  451152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:01:36.585087  451152 api_server.go:72] duration metric: took 13.539444435s to wait for apiserver process to appear ...
	I1013 22:01:36.585112  451152 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:01:36.585132  451152 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:01:36.589690  451152 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1013 22:01:36.591114  451152 api_server.go:141] control plane version: v1.28.0
	I1013 22:01:36.591140  451152 api_server.go:131] duration metric: took 6.021575ms to wait for apiserver health ...
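
The healthz probe here is a plain HTTPS GET that counts anything other than a 200 with an "ok" body as not ready. A minimal sketch (certificate verification is skipped in this illustration; the real client trusts the cluster CA):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption for brevity: skip verification; a real client
			// should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: connection refused" as in the log
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
	}
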
	I1013 22:01:36.591149  451152 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:01:36.594932  451152 system_pods.go:59] 8 kube-system pods found
	I1013 22:01:36.594977  451152 system_pods.go:61] "coredns-5dd5756b68-wx29h" [782e61d5-3652-4825-815d-3cbbe7a1e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:01:36.594987  451152 system_pods.go:61] "etcd-old-k8s-version-534822" [d1b2c571-5ade-4694-bbfa-0bd874028e83] Running
	I1013 22:01:36.595018  451152 system_pods.go:61] "kindnet-snc6w" [22c86c71-69cc-4b6a-b850-b737d719fd82] Running
	I1013 22:01:36.595025  451152 system_pods.go:61] "kube-apiserver-old-k8s-version-534822" [ff38c5b2-f2b8-4d19-8103-3ef187e0553c] Running
	I1013 22:01:36.595032  451152 system_pods.go:61] "kube-controller-manager-old-k8s-version-534822" [1d630c3c-912e-4835-8873-45b97671983f] Running
	I1013 22:01:36.595039  451152 system_pods.go:61] "kube-proxy-dvt68" [be84538f-9c85-4223-8e1f-c017d85bf13a] Running
	I1013 22:01:36.595048  451152 system_pods.go:61] "kube-scheduler-old-k8s-version-534822" [1356f6cf-0f80-49f7-8eb2-ba9d543a4775] Running
	I1013 22:01:36.595057  451152 system_pods.go:61] "storage-provisioner" [25d4b2c1-7e52-4aa4-8812-c88200601898] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:01:36.595070  451152 system_pods.go:74] duration metric: took 3.915111ms to wait for pod list to return data ...
	I1013 22:01:36.595087  451152 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:01:36.597731  451152 default_sa.go:45] found service account: "default"
	I1013 22:01:36.597760  451152 default_sa.go:55] duration metric: took 2.663496ms for default service account to be created ...
	I1013 22:01:36.597772  451152 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:01:36.601424  451152 system_pods.go:86] 8 kube-system pods found
	I1013 22:01:36.601450  451152 system_pods.go:89] "coredns-5dd5756b68-wx29h" [782e61d5-3652-4825-815d-3cbbe7a1e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:01:36.601455  451152 system_pods.go:89] "etcd-old-k8s-version-534822" [d1b2c571-5ade-4694-bbfa-0bd874028e83] Running
	I1013 22:01:36.601461  451152 system_pods.go:89] "kindnet-snc6w" [22c86c71-69cc-4b6a-b850-b737d719fd82] Running
	I1013 22:01:36.601464  451152 system_pods.go:89] "kube-apiserver-old-k8s-version-534822" [ff38c5b2-f2b8-4d19-8103-3ef187e0553c] Running
	I1013 22:01:36.601468  451152 system_pods.go:89] "kube-controller-manager-old-k8s-version-534822" [1d630c3c-912e-4835-8873-45b97671983f] Running
	I1013 22:01:36.601471  451152 system_pods.go:89] "kube-proxy-dvt68" [be84538f-9c85-4223-8e1f-c017d85bf13a] Running
	I1013 22:01:36.601474  451152 system_pods.go:89] "kube-scheduler-old-k8s-version-534822" [1356f6cf-0f80-49f7-8eb2-ba9d543a4775] Running
	I1013 22:01:36.601480  451152 system_pods.go:89] "storage-provisioner" [25d4b2c1-7e52-4aa4-8812-c88200601898] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:01:36.601506  451152 retry.go:31] will retry after 265.261918ms: missing components: kube-dns
	I1013 22:01:36.870751  451152 system_pods.go:86] 8 kube-system pods found
	I1013 22:01:36.870795  451152 system_pods.go:89] "coredns-5dd5756b68-wx29h" [782e61d5-3652-4825-815d-3cbbe7a1e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:01:36.870804  451152 system_pods.go:89] "etcd-old-k8s-version-534822" [d1b2c571-5ade-4694-bbfa-0bd874028e83] Running
	I1013 22:01:36.870812  451152 system_pods.go:89] "kindnet-snc6w" [22c86c71-69cc-4b6a-b850-b737d719fd82] Running
	I1013 22:01:36.870820  451152 system_pods.go:89] "kube-apiserver-old-k8s-version-534822" [ff38c5b2-f2b8-4d19-8103-3ef187e0553c] Running
	I1013 22:01:36.870826  451152 system_pods.go:89] "kube-controller-manager-old-k8s-version-534822" [1d630c3c-912e-4835-8873-45b97671983f] Running
	I1013 22:01:36.870832  451152 system_pods.go:89] "kube-proxy-dvt68" [be84538f-9c85-4223-8e1f-c017d85bf13a] Running
	I1013 22:01:36.870841  451152 system_pods.go:89] "kube-scheduler-old-k8s-version-534822" [1356f6cf-0f80-49f7-8eb2-ba9d543a4775] Running
	I1013 22:01:36.870849  451152 system_pods.go:89] "storage-provisioner" [25d4b2c1-7e52-4aa4-8812-c88200601898] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:01:36.870870  451152 retry.go:31] will retry after 302.381071ms: missing components: kube-dns
	I1013 22:01:37.177959  451152 system_pods.go:86] 8 kube-system pods found
	I1013 22:01:37.178006  451152 system_pods.go:89] "coredns-5dd5756b68-wx29h" [782e61d5-3652-4825-815d-3cbbe7a1e5f8] Running
	I1013 22:01:37.178016  451152 system_pods.go:89] "etcd-old-k8s-version-534822" [d1b2c571-5ade-4694-bbfa-0bd874028e83] Running
	I1013 22:01:37.178021  451152 system_pods.go:89] "kindnet-snc6w" [22c86c71-69cc-4b6a-b850-b737d719fd82] Running
	I1013 22:01:37.178026  451152 system_pods.go:89] "kube-apiserver-old-k8s-version-534822" [ff38c5b2-f2b8-4d19-8103-3ef187e0553c] Running
	I1013 22:01:37.178032  451152 system_pods.go:89] "kube-controller-manager-old-k8s-version-534822" [1d630c3c-912e-4835-8873-45b97671983f] Running
	I1013 22:01:37.178037  451152 system_pods.go:89] "kube-proxy-dvt68" [be84538f-9c85-4223-8e1f-c017d85bf13a] Running
	I1013 22:01:37.178043  451152 system_pods.go:89] "kube-scheduler-old-k8s-version-534822" [1356f6cf-0f80-49f7-8eb2-ba9d543a4775] Running
	I1013 22:01:37.178047  451152 system_pods.go:89] "storage-provisioner" [25d4b2c1-7e52-4aa4-8812-c88200601898] Running
	I1013 22:01:37.178059  451152 system_pods.go:126] duration metric: took 580.278978ms to wait for k8s-apps to be running ...
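
The retries above poll the kube-system pod list and sleep a few hundred milliseconds between attempts until no required component is missing. A sketch of that polling shape (missingComponents is a hypothetical stand-in for the real API query, and the backoff growth here is an assumption):

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// missingComponents is a stand-in: it would list kube-system pods and
	// report which required apps (e.g. kube-dns) are not yet Running.
	func missingComponents() []string {
		// ... query the API server here ...
		return nil
	}
	
	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			missing := missingComponents()
			if len(missing) == 0 {
				fmt.Println("k8s-apps are running")
				return
			}
			fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
			time.Sleep(backoff)
			backoff *= 2 // the log shows modestly growing, jittered waits (265ms, 302ms, ...)
		}
	}
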
	I1013 22:01:37.178069  451152 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:01:37.178123  451152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:01:37.192190  451152 system_svc.go:56] duration metric: took 14.111794ms WaitForService to wait for kubelet
	I1013 22:01:37.192228  451152 kubeadm.go:586] duration metric: took 14.146591124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:01:37.192254  451152 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:01:37.195308  451152 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:01:37.195335  451152 node_conditions.go:123] node cpu capacity is 8
	I1013 22:01:37.195350  451152 node_conditions.go:105] duration metric: took 3.090293ms to run NodePressure ...
	I1013 22:01:37.195363  451152 start.go:241] waiting for startup goroutines ...
	I1013 22:01:37.195370  451152 start.go:246] waiting for cluster config update ...
	I1013 22:01:37.195379  451152 start.go:255] writing updated cluster config ...
	I1013 22:01:37.195623  451152 ssh_runner.go:195] Run: rm -f paused
	I1013 22:01:37.200366  451152 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:01:37.204454  451152 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wx29h" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.208745  451152 pod_ready.go:94] pod "coredns-5dd5756b68-wx29h" is "Ready"
	I1013 22:01:37.208768  451152 pod_ready.go:86] duration metric: took 4.294743ms for pod "coredns-5dd5756b68-wx29h" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.211594  451152 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.215297  451152 pod_ready.go:94] pod "etcd-old-k8s-version-534822" is "Ready"
	I1013 22:01:37.215327  451152 pod_ready.go:86] duration metric: took 3.708482ms for pod "etcd-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.217797  451152 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.221439  451152 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-534822" is "Ready"
	I1013 22:01:37.221460  451152 pod_ready.go:86] duration metric: took 3.639073ms for pod "kube-apiserver-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.223944  451152 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.604670  451152 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-534822" is "Ready"
	I1013 22:01:37.604698  451152 pod_ready.go:86] duration metric: took 380.735741ms for pod "kube-controller-manager-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:37.805975  451152 pod_ready.go:83] waiting for pod "kube-proxy-dvt68" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:38.204882  451152 pod_ready.go:94] pod "kube-proxy-dvt68" is "Ready"
	I1013 22:01:38.204917  451152 pod_ready.go:86] duration metric: took 398.893828ms for pod "kube-proxy-dvt68" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:38.405031  451152 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:38.804764  451152 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-534822" is "Ready"
	I1013 22:01:38.804792  451152 pod_ready.go:86] duration metric: took 399.734211ms for pod "kube-scheduler-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:01:38.804803  451152 pod_ready.go:40] duration metric: took 1.604398532s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:01:38.861604  451152 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1013 22:01:38.863533  451152 out.go:203] 
	W1013 22:01:38.864951  451152 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 22:01:38.866255  451152 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:01:38.867928  451152 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-534822" cluster and "default" namespace by default
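
The skew warning above compares client and server minor versions: 1.34 against 1.28 gives the "minor skew: 6" noted in the log. A tiny sketch of that arithmetic:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorSkew returns the distance between the minor components of two
	// "major.minor.patch" versions, as in the "minor skew: 6" line above.
	func minorSkew(client, server string) int {
		minor := func(v string) int {
			n, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return n
		}
		d := minor(client) - minor(server)
		if d < 0 {
			d = -d
		}
		return d
	}
	
	func main() {
		fmt.Println(minorSkew("1.34.1", "1.28.0")) // 6
	}
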
	W1013 22:01:36.562481  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:36.562505  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:36.562520  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:36.602034  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:36.602063  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:36.653948  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:36.653984  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:36.684563  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:36.684592  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:39.244905  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:39.245410  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:39.245465  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:39.245530  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:39.277514  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:39.277540  410447 cri.go:89] found id: ""
	I1013 22:01:39.277551  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:39.277608  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:39.282099  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:39.282168  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:39.310079  410447 cri.go:89] found id: ""
	I1013 22:01:39.310109  410447 logs.go:282] 0 containers: []
	W1013 22:01:39.310119  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:39.310127  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:39.310196  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:39.340072  410447 cri.go:89] found id: ""
	I1013 22:01:39.340102  410447 logs.go:282] 0 containers: []
	W1013 22:01:39.340114  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:39.340121  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:39.340190  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:39.374904  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:39.374939  410447 cri.go:89] found id: ""
	I1013 22:01:39.374950  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:39.375048  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:39.379672  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:39.379756  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:39.415290  410447 cri.go:89] found id: ""
	I1013 22:01:39.415321  410447 logs.go:282] 0 containers: []
	W1013 22:01:39.415333  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:39.415342  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:39.415409  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:39.451304  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:39.451330  410447 cri.go:89] found id: ""
	I1013 22:01:39.451340  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:39.451401  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:39.456431  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:39.456508  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:39.491711  410447 cri.go:89] found id: ""
	I1013 22:01:39.491739  410447 logs.go:282] 0 containers: []
	W1013 22:01:39.491750  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:39.491758  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:39.491823  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:39.524694  410447 cri.go:89] found id: ""
	I1013 22:01:39.524717  410447 logs.go:282] 0 containers: []
	W1013 22:01:39.524728  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:39.524748  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:39.524764  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:39.637623  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:39.637665  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:39.658512  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:39.658540  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:39.732739  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:39.732761  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:39.732776  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:39.771324  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:39.771361  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:39.836651  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:39.836703  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:39.871906  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:39.871934  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:39.933497  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:39.933543  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:37.567083  455618 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:01:37.567241  455618 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:01:38.067719  455618 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.907806ms
	I1013 22:01:38.070569  455618 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:01:38.070695  455618 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1013 22:01:38.070803  455618 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:01:38.070929  455618 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:01:39.858522  455618 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.787732217s
	I1013 22:01:40.371644  455618 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.301047006s
	I1013 22:01:42.072269  455618 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001613753s
	I1013 22:01:42.084510  455618 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:01:42.094227  455618 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:01:42.103978  455618 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:01:42.104292  455618 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-080337 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:01:42.112733  455618 kubeadm.go:318] [bootstrap-token] Using token: yfpg6o.hh4ip0s3gdfr3ndm
	I1013 22:01:42.114230  455618 out.go:252]   - Configuring RBAC rules ...
	I1013 22:01:42.114397  455618 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:01:42.117436  455618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:01:42.122708  455618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:01:42.125414  455618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:01:42.128652  455618 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:01:42.130933  455618 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:01:42.478340  455618 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:01:42.895880  455618 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:01:43.478091  455618 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:01:43.478939  455618 kubeadm.go:318] 
	I1013 22:01:43.479069  455618 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:01:43.479090  455618 kubeadm.go:318] 
	I1013 22:01:43.479210  455618 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:01:43.479231  455618 kubeadm.go:318] 
	I1013 22:01:43.479267  455618 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:01:43.479352  455618 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:01:43.479439  455618 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:01:43.479448  455618 kubeadm.go:318] 
	I1013 22:01:43.479522  455618 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:01:43.479531  455618 kubeadm.go:318] 
	I1013 22:01:43.479602  455618 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:01:43.479614  455618 kubeadm.go:318] 
	I1013 22:01:43.479719  455618 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:01:43.479799  455618 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:01:43.479912  455618 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:01:43.479923  455618 kubeadm.go:318] 
	I1013 22:01:43.480058  455618 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:01:43.480169  455618 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:01:43.480186  455618 kubeadm.go:318] 
	I1013 22:01:43.480306  455618 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yfpg6o.hh4ip0s3gdfr3ndm \
	I1013 22:01:43.480461  455618 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:01:43.480494  455618 kubeadm.go:318] 	--control-plane 
	I1013 22:01:43.480503  455618 kubeadm.go:318] 
	I1013 22:01:43.480621  455618 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:01:43.480635  455618 kubeadm.go:318] 
	I1013 22:01:43.480762  455618 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yfpg6o.hh4ip0s3gdfr3ndm \
	I1013 22:01:43.480920  455618 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:01:43.482786  455618 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:01:43.482937  455618 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:01:43.482960  455618 cni.go:84] Creating CNI manager for ""
	I1013 22:01:43.482973  455618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:01:43.484833  455618 out.go:179] * Configuring CNI (Container Networking Interface) ...
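
For context, the [control-plane-check] lines above poll three static-pod health endpoints until each answers HTTP 200 or the 4m0s budget runs out. Below is a minimal Go sketch of an equivalent probe; the endpoint URLs are the ones reported in the log, while the insecure TLS client is an assumption made to keep the sketch short (kubeadm's real waiter validates against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probe polls url until it returns HTTP 200 or the deadline expires.
	// Hypothetical helper; kubeadm's actual check lives in its own waiter code.
	func probe(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: skip TLS verification for brevity; the real
			// check trusts the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoints exactly as reported by [control-plane-check] above.
		endpoints := []string{
			"https://192.168.94.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		}
		for _, u := range endpoints {
			if err := probe(u, 4*time.Minute); err != nil {
				fmt.Println(err)
			}
		}
	}
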
	I1013 22:01:42.469181  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:42.469628  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:42.469694  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:42.469752  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:42.501578  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:42.501604  410447 cri.go:89] found id: ""
	I1013 22:01:42.501616  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:42.501680  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:42.506058  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:42.506131  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:42.538117  410447 cri.go:89] found id: ""
	I1013 22:01:42.538151  410447 logs.go:282] 0 containers: []
	W1013 22:01:42.538163  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:42.538171  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:42.538239  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:42.568036  410447 cri.go:89] found id: ""
	I1013 22:01:42.568070  410447 logs.go:282] 0 containers: []
	W1013 22:01:42.568082  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:42.568090  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:42.568158  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:42.598808  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:42.598832  410447 cri.go:89] found id: ""
	I1013 22:01:42.598841  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:42.598893  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:42.603152  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:42.603223  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:42.632822  410447 cri.go:89] found id: ""
	I1013 22:01:42.632853  410447 logs.go:282] 0 containers: []
	W1013 22:01:42.632863  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:42.632870  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:42.632920  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:42.666498  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:42.666522  410447 cri.go:89] found id: ""
	I1013 22:01:42.666533  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:42.666606  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:42.671188  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:42.671261  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:42.707128  410447 cri.go:89] found id: ""
	I1013 22:01:42.707161  410447 logs.go:282] 0 containers: []
	W1013 22:01:42.707174  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:42.707181  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:42.707241  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:42.744340  410447 cri.go:89] found id: ""
	I1013 22:01:42.744371  410447 logs.go:282] 0 containers: []
	W1013 22:01:42.744381  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:42.744393  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:42.744409  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:42.799070  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:42.799110  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:42.828314  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:42.828369  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:42.895041  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:42.895089  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:42.930193  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:42.930228  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:01:43.023521  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:43.023563  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:43.039046  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:43.039075  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:43.097349  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:43.097372  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:43.097388  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:45.632633  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:01:45.633090  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:01:45.633150  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:01:45.633210  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:01:45.662984  410447 cri.go:89] found id: "2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:45.663023  410447 cri.go:89] found id: ""
	I1013 22:01:45.663034  410447 logs.go:282] 1 containers: [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a]
	I1013 22:01:45.663091  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:45.667451  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:01:45.667538  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:01:45.695080  410447 cri.go:89] found id: ""
	I1013 22:01:45.695108  410447 logs.go:282] 0 containers: []
	W1013 22:01:45.695119  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:01:45.695126  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:01:45.695190  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:01:45.723807  410447 cri.go:89] found id: ""
	I1013 22:01:45.723834  410447 logs.go:282] 0 containers: []
	W1013 22:01:45.723842  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:01:45.723851  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:01:45.723912  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:01:45.752160  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:45.752185  410447 cri.go:89] found id: ""
	I1013 22:01:45.752195  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:01:45.752258  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:45.756614  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:01:45.756681  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:01:45.784293  410447 cri.go:89] found id: ""
	I1013 22:01:45.784319  410447 logs.go:282] 0 containers: []
	W1013 22:01:45.784330  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:01:45.784338  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:01:45.784398  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:01:45.811501  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:45.811528  410447 cri.go:89] found id: ""
	I1013 22:01:45.811539  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:01:45.811597  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:01:45.816263  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:01:45.816351  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:01:45.847188  410447 cri.go:89] found id: ""
	I1013 22:01:45.847220  410447 logs.go:282] 0 containers: []
	W1013 22:01:45.847231  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:01:45.847239  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:01:45.847301  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:01:45.877555  410447 cri.go:89] found id: ""
	I1013 22:01:45.877582  410447 logs.go:282] 0 containers: []
	W1013 22:01:45.877593  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:01:45.877605  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:01:45.877621  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:01:45.893557  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:01:45.893591  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:01:45.949312  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:01:45.949337  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:01:45.949352  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:01:45.983384  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:01:45.983421  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:01:46.035902  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:01:46.035938  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:01:46.064876  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:01:46.064905  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:01:46.118950  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:01:46.119004  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:01:46.152281  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:01:46.152311  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
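
Meanwhile the second profile in this run (pid 410447) cannot reach its apiserver at 192.168.76.2:8443, so between healthz attempts it gathers diagnostics straight from the container runtime: list matching container IDs with crictl, then tail each container's last 400 log lines. A rough Go equivalent of that gather step, assuming crictl and passwordless sudo are available as the log's own commands do:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tailContainerLogs mirrors the two crictl invocations in the log:
	// resolve a container ID by name, then tail its last 400 log lines.
	func tailContainerLogs(name string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return "", err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return "", fmt.Errorf("no container was found matching %q", name)
		}
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
		return string(logs), err
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
			if logs, err := tailContainerLogs(name); err != nil {
				fmt.Println(name+":", err)
			} else {
				fmt.Print(logs)
			}
		}
	}
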
	I1013 22:01:43.486189  455618 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:01:43.490740  455618 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:01:43.490758  455618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:01:43.504268  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:01:43.711711  455618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:01:43.711770  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:43.711824  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-080337 minikube.k8s.io/updated_at=2025_10_13T22_01_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=no-preload-080337 minikube.k8s.io/primary=true
	I1013 22:01:43.798734  455618 ops.go:34] apiserver oom_adj: -16
	I1013 22:01:43.798790  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:44.299716  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:44.799789  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:45.299202  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:45.799048  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:46.299452  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:46.798953  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:47.298944  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:47.799098  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:48.299138  455618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:01:48.373726  455618 kubeadm.go:1113] duration metric: took 4.662016028s to wait for elevateKubeSystemPrivileges
	I1013 22:01:48.373764  455618 kubeadm.go:402] duration metric: took 15.075405475s to StartCluster
	I1013 22:01:48.373782  455618 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:48.373863  455618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:01:48.375252  455618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:01:48.375481  455618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:01:48.375501  455618 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:01:48.375591  455618 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:01:48.375689  455618 addons.go:69] Setting storage-provisioner=true in profile "no-preload-080337"
	I1013 22:01:48.375712  455618 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:01:48.375712  455618 addons.go:69] Setting default-storageclass=true in profile "no-preload-080337"
	I1013 22:01:48.375739  455618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-080337"
	I1013 22:01:48.375717  455618 addons.go:238] Setting addon storage-provisioner=true in "no-preload-080337"
	I1013 22:01:48.375829  455618 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:01:48.376174  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:48.376367  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:48.377641  455618 out.go:179] * Verifying Kubernetes components...
	I1013 22:01:48.378864  455618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:01:48.401855  455618 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:01:48.402054  455618 addons.go:238] Setting addon default-storageclass=true in "no-preload-080337"
	I1013 22:01:48.402100  455618 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:01:48.402534  455618 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:01:48.403447  455618 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:01:48.403477  455618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:01:48.403538  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:48.430042  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:48.432202  455618 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:01:48.432227  455618 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:01:48.432286  455618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:01:48.458551  455618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:01:48.484922  455618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:01:48.547177  455618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:01:48.568812  455618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:01:48.580188  455618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:01:48.717481  455618 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1013 22:01:48.718409  455618 node_ready.go:35] waiting up to 6m0s for node "no-preload-080337" to be "Ready" ...
	I1013 22:01:48.943986  455618 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
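
The half-second cadence of the repeated "kubectl get sa default" runs above is minikube waiting for the control plane to provision the default ServiceAccount, the step its duration metric attributes 4.662016028s to. A minimal sketch of that wait loop, using the same binary and kubeconfig paths the log shows; the real helper also enforces an overall timeout, omitted here:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
		start := time.Now()
		// Retry every 500ms until the default ServiceAccount exists,
		// matching the cadence visible in the log above.
		for exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() != nil {
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
	}
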
	
	
	==> CRI-O <==
	Oct 13 22:01:36 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:36.509033249Z" level=info msg="Starting container: 46f90f4d7a263c25b12cd2b44236c2fddb1be728b7d29efdf03f192817a1f46f" id=caaf4ba3-6b50-4dda-89eb-34e562e820ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:01:36 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:36.511144372Z" level=info msg="Started container" PID=2143 containerID=46f90f4d7a263c25b12cd2b44236c2fddb1be728b7d29efdf03f192817a1f46f description=kube-system/coredns-5dd5756b68-wx29h/coredns id=caaf4ba3-6b50-4dda-89eb-34e562e820ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=59508d8995d57d6a7813a9709932d473440795b233e54baf2f02e02ef7f7fca2
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.381455635Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f12b2129-4fab-400f-aed0-5b8ed507e83f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.381591168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.387919369Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc5e3f8c4ee5e822e8d9ca0854053ad1cb753f41114a1b47f1e7f43e4761bb29 UID:26402fa0-a911-42a3-ad38-62ca0dd617e3 NetNS:/var/run/netns/688427b6-139d-464b-8f8f-48409067ddfc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c82ae8}] Aliases:map[]}"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.38796595Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.402660234Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc5e3f8c4ee5e822e8d9ca0854053ad1cb753f41114a1b47f1e7f43e4761bb29 UID:26402fa0-a911-42a3-ad38-62ca0dd617e3 NetNS:/var/run/netns/688427b6-139d-464b-8f8f-48409067ddfc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c82ae8}] Aliases:map[]}"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.402873741Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.403960011Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.405252274Z" level=info msg="Ran pod sandbox cc5e3f8c4ee5e822e8d9ca0854053ad1cb753f41114a1b47f1e7f43e4761bb29 with infra container: default/busybox/POD" id=f12b2129-4fab-400f-aed0-5b8ed507e83f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.406595244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=501a6eff-1080-48f1-a9f4-fe482da50d49 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.406857993Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=501a6eff-1080-48f1-a9f4-fe482da50d49 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.406910087Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=501a6eff-1080-48f1-a9f4-fe482da50d49 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.407587257Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9016dec-446d-4e28-90cd-42fc3babc5d0 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:01:39 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:39.409457015Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.145844481Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b9016dec-446d-4e28-90cd-42fc3babc5d0 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.146608266Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f67e403d-bf8b-41c0-aa7b-d0c51e6979f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.148075149Z" level=info msg="Creating container: default/busybox/busybox" id=fdcfb9ce-defa-4965-9d70-1f003ae1d0fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.149104805Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.153937611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.154501493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.184067982Z" level=info msg="Created container e2d023d4e2c1dc94ab951c0a95c45db9f300e8c9e0ea35848296de4d69b8d274: default/busybox/busybox" id=fdcfb9ce-defa-4965-9d70-1f003ae1d0fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.184718928Z" level=info msg="Starting container: e2d023d4e2c1dc94ab951c0a95c45db9f300e8c9e0ea35848296de4d69b8d274" id=d2b6175d-963e-413b-97fb-5fd7f66d62b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:01:40 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:40.186944907Z" level=info msg="Started container" PID=2220 containerID=e2d023d4e2c1dc94ab951c0a95c45db9f300e8c9e0ea35848296de4d69b8d274 description=default/busybox/busybox id=d2b6175d-963e-413b-97fb-5fd7f66d62b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc5e3f8c4ee5e822e8d9ca0854053ad1cb753f41114a1b47f1e7f43e4761bb29
	Oct 13 22:01:48 old-k8s-version-534822 crio[772]: time="2025-10-13T22:01:48.175336824Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e2d023d4e2c1d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   cc5e3f8c4ee5e       busybox                                          default
	46f90f4d7a263       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   59508d8995d57       coredns-5dd5756b68-wx29h                         kube-system
	95071c3cc5c62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   8fdd38af6a984       storage-provisioner                              kube-system
	309ef5a2f3fb9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   e48796940b9f3       kindnet-snc6w                                    kube-system
	0f55fd1a38827       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   22307fbd10f23       kube-proxy-dvt68                                 kube-system
	4116abd0ddd10       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   81bf1ce07ef00       kube-controller-manager-old-k8s-version-534822   kube-system
	b07e3f37b4af3       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   df64fe25678a0       kube-scheduler-old-k8s-version-534822            kube-system
	370be0cb67cb8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   1cf4fa9ed86d4       etcd-old-k8s-version-534822                      kube-system
	b6c42d5f9284a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   cc69e0f1e42be       kube-apiserver-old-k8s-version-534822            kube-system
	
	
	==> coredns [46f90f4d7a263c25b12cd2b44236c2fddb1be728b7d29efdf03f192817a1f46f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45473 - 10245 "HINFO IN 7865241381009559048.7244466813428948603. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.498980127s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-534822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-534822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-534822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-534822
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:01:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:01:40 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:01:40 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:01:40 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:01:40 +0000   Mon, 13 Oct 2025 22:01:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-534822
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                6dba6f53-90ba-4da3-b3ef-d819199a3aeb
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-wx29h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-534822                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-snc6w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-534822             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-534822    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-dvt68                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-534822             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-534822 event: Registered Node old-k8s-version-534822 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-534822 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [370be0cb67cb8cabef56628993086420949a6ebcd6f3774321605b6725e6ae86] <==
	{"level":"info","ts":"2025-10-13T22:01:05.177071Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:01:05.463273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-13T22:01:05.463372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-13T22:01:05.463414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-10-13T22:01:05.463447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:01:05.463465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-13T22:01:05.463508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-10-13T22:01:05.463529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-13T22:01:05.464338Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:01:05.465026Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-534822 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:01:05.46512Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:01:05.465197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:01:05.465225Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:01:05.465301Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:01:05.465323Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:01:05.465356Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:01:05.465369Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T22:01:05.466917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T22:01:05.46802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-13T22:01:23.554233Z","caller":"traceutil/trace.go:171","msg":"trace[934997839] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"102.915008ms","start":"2025-10-13T22:01:23.451289Z","end":"2025-10-13T22:01:23.554204Z","steps":["trace[934997839] 'process raft request'  (duration: 58.089841ms)","trace[934997839] 'compare'  (duration: 44.597661ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:01:23.809505Z","caller":"traceutil/trace.go:171","msg":"trace[1146289007] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"102.726336ms","start":"2025-10-13T22:01:23.706751Z","end":"2025-10-13T22:01:23.809477Z","steps":["trace[1146289007] 'process raft request'  (duration: 90.565621ms)","trace[1146289007] 'compare'  (duration: 11.990015ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:01:23.809733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.394834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-13T22:01:23.80981Z","caller":"traceutil/trace.go:171","msg":"trace[395452803] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:386; }","duration":"102.486085ms","start":"2025-10-13T22:01:23.707313Z","end":"2025-10-13T22:01:23.809799Z","steps":["trace[395452803] 'agreement among raft nodes before linearized reading'  (duration: 102.364534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:01:23.809765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.565558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-534822\" ","response":"range_response_count:1 size:7467"}
	{"level":"info","ts":"2025-10-13T22:01:23.809859Z","caller":"traceutil/trace.go:171","msg":"trace[464748311] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-old-k8s-version-534822; range_end:; response_count:1; response_revision:386; }","duration":"102.662734ms","start":"2025-10-13T22:01:23.707177Z","end":"2025-10-13T22:01:23.80984Z","steps":["trace[464748311] 'agreement among raft nodes before linearized reading'  (duration: 102.530629ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:01:49 up  1:44,  0 user,  load average: 5.22, 3.47, 6.18
	Linux old-k8s-version-534822 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [309ef5a2f3fb9339db21df946eaa7efa2e8dcad2e4f3ca4730fc63e62a95b139] <==
	I1013 22:01:25.596124       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:01:25.596423       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:01:25.596598       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:01:25.596616       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:01:25.596644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:01:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:01:25.799926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:01:25.891894       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:01:25.891954       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:01:25.991668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:01:26.292708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:01:26.292737       1 metrics.go:72] Registering metrics
	I1013 22:01:26.292793       1 controller.go:711] "Syncing nftables rules"
	I1013 22:01:35.808107       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:01:35.808146       1 main.go:301] handling current node
	I1013 22:01:45.803104       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:01:45.803150       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b6c42d5f9284af680b25bc97d15abcebf852a5dccace5f5a006dc48f95ed356d] <==
	I1013 22:01:06.882950       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 22:01:06.883307       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 22:01:06.883670       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:01:06.883821       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:01:06.885262       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1013 22:01:06.885328       1 aggregator.go:166] initial CRD sync complete...
	I1013 22:01:06.885343       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 22:01:06.885350       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:01:06.885358       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:01:06.931341       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:01:07.787314       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:01:07.792486       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:01:07.792523       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:01:08.235727       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:01:08.270161       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:01:08.393615       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:01:08.401608       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1013 22:01:08.402890       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:01:08.408076       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:01:09.195123       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:01:09.951538       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:01:09.962658       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:01:09.972504       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1013 22:01:22.700456       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:01:22.752359       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4116abd0ddd10a814742a86b175bf67a4d043147a0c87302c248cee5badd95c7] <==
	I1013 22:01:22.195856       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 22:01:22.283782       1 shared_informer.go:318] Caches are synced for attach detach
	I1013 22:01:22.630775       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:01:22.641944       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:01:22.641985       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:01:22.710293       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-snc6w"
	I1013 22:01:22.712285       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dvt68"
	I1013 22:01:22.757590       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1013 22:01:23.004893       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j56j2"
	I1013 22:01:23.014393       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wx29h"
	I1013 22:01:23.021777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="264.354778ms"
	I1013 22:01:23.036350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.441262ms"
	I1013 22:01:23.036459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.266µs"
	I1013 22:01:23.778771       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1013 22:01:23.826447       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-j56j2"
	I1013 22:01:23.834058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.375985ms"
	I1013 22:01:23.841477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.36199ms"
	I1013 22:01:23.841597       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.437µs"
	I1013 22:01:23.841690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.205µs"
	I1013 22:01:36.149234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.471µs"
	I1013 22:01:36.162183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.002µs"
	I1013 22:01:37.096696       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1013 22:01:37.133068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.966µs"
	I1013 22:01:37.151372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.117251ms"
	I1013 22:01:37.151477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.264µs"
	
	
	==> kube-proxy [0f55fd1a388272cc22bbf36a7bab6626f4b77b7fc5d685a92ef2bfa149503e8c] <==
	I1013 22:01:23.688081       1 server_others.go:69] "Using iptables proxy"
	I1013 22:01:23.779136       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1013 22:01:23.826358       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:01:23.829578       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:01:23.829627       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:01:23.829637       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:01:23.829697       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:01:23.830085       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:01:23.830109       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:01:23.835467       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:01:23.836644       1 config.go:315] "Starting node config controller"
	I1013 22:01:23.836663       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:01:23.836049       1 config.go:188] "Starting service config controller"
	I1013 22:01:23.836327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:01:23.836916       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:01:23.936980       1 shared_informer.go:318] Caches are synced for node config
	I1013 22:01:23.940176       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:01:23.940200       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b07e3f37b4af33c874faa343c35567575569fcac1ab57fb72fbd3ae048e27ab9] <==
	W1013 22:01:06.851707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1013 22:01:06.852071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1013 22:01:06.851778       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1013 22:01:06.852087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1013 22:01:07.734798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1013 22:01:07.734838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1013 22:01:07.748757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1013 22:01:07.749066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1013 22:01:07.809809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1013 22:01:07.809851       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1013 22:01:07.810509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1013 22:01:07.810594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1013 22:01:07.868083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1013 22:01:07.868118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1013 22:01:07.959052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1013 22:01:07.959096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1013 22:01:07.967793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1013 22:01:07.967830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1013 22:01:07.986555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1013 22:01:07.986587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1013 22:01:08.002208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1013 22:01:08.002243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1013 22:01:08.055005       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1013 22:01:08.055042       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1013 22:01:09.746620       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.115440    1384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.716608    1384 topology_manager.go:215] "Topology Admit Handler" podUID="22c86c71-69cc-4b6a-b850-b737d719fd82" podNamespace="kube-system" podName="kindnet-snc6w"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.721448    1384 topology_manager.go:215] "Topology Admit Handler" podUID="be84538f-9c85-4223-8e1f-c017d85bf13a" podNamespace="kube-system" podName="kube-proxy-dvt68"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913666    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vpw\" (UniqueName: \"kubernetes.io/projected/22c86c71-69cc-4b6a-b850-b737d719fd82-kube-api-access-s8vpw\") pod \"kindnet-snc6w\" (UID: \"22c86c71-69cc-4b6a-b850-b737d719fd82\") " pod="kube-system/kindnet-snc6w"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913740    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be84538f-9c85-4223-8e1f-c017d85bf13a-lib-modules\") pod \"kube-proxy-dvt68\" (UID: \"be84538f-9c85-4223-8e1f-c017d85bf13a\") " pod="kube-system/kube-proxy-dvt68"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913766    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22c86c71-69cc-4b6a-b850-b737d719fd82-cni-cfg\") pod \"kindnet-snc6w\" (UID: \"22c86c71-69cc-4b6a-b850-b737d719fd82\") " pod="kube-system/kindnet-snc6w"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913788    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22c86c71-69cc-4b6a-b850-b737d719fd82-xtables-lock\") pod \"kindnet-snc6w\" (UID: \"22c86c71-69cc-4b6a-b850-b737d719fd82\") " pod="kube-system/kindnet-snc6w"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913876    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22c86c71-69cc-4b6a-b850-b737d719fd82-lib-modules\") pod \"kindnet-snc6w\" (UID: \"22c86c71-69cc-4b6a-b850-b737d719fd82\") " pod="kube-system/kindnet-snc6w"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.913950    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be84538f-9c85-4223-8e1f-c017d85bf13a-kube-proxy\") pod \"kube-proxy-dvt68\" (UID: \"be84538f-9c85-4223-8e1f-c017d85bf13a\") " pod="kube-system/kube-proxy-dvt68"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.914032    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be84538f-9c85-4223-8e1f-c017d85bf13a-xtables-lock\") pod \"kube-proxy-dvt68\" (UID: \"be84538f-9c85-4223-8e1f-c017d85bf13a\") " pod="kube-system/kube-proxy-dvt68"
	Oct 13 22:01:22 old-k8s-version-534822 kubelet[1384]: I1013 22:01:22.914086    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2w2\" (UniqueName: \"kubernetes.io/projected/be84538f-9c85-4223-8e1f-c017d85bf13a-kube-api-access-bv2w2\") pod \"kube-proxy-dvt68\" (UID: \"be84538f-9c85-4223-8e1f-c017d85bf13a\") " pod="kube-system/kube-proxy-dvt68"
	Oct 13 22:01:24 old-k8s-version-534822 kubelet[1384]: I1013 22:01:24.103338    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dvt68" podStartSLOduration=2.10328433 podCreationTimestamp="2025-10-13 22:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:24.103190805 +0000 UTC m=+14.176469792" watchObservedRunningTime="2025-10-13 22:01:24.10328433 +0000 UTC m=+14.176563318"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.120007    1384 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.149215    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-snc6w" podStartSLOduration=12.149154678 podCreationTimestamp="2025-10-13 22:01:22 +0000 UTC" firstStartedPulling="2025-10-13 22:01:23.390034269 +0000 UTC m=+13.463313238" lastFinishedPulling="2025-10-13 22:01:25.390043266 +0000 UTC m=+15.463322244" observedRunningTime="2025-10-13 22:01:26.110014548 +0000 UTC m=+16.183293527" watchObservedRunningTime="2025-10-13 22:01:36.149163684 +0000 UTC m=+26.222442671"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.149934    1384 topology_manager.go:215] "Topology Admit Handler" podUID="782e61d5-3652-4825-815d-3cbbe7a1e5f8" podNamespace="kube-system" podName="coredns-5dd5756b68-wx29h"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.150240    1384 topology_manager.go:215] "Topology Admit Handler" podUID="25d4b2c1-7e52-4aa4-8812-c88200601898" podNamespace="kube-system" podName="storage-provisioner"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.319022    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/782e61d5-3652-4825-815d-3cbbe7a1e5f8-config-volume\") pod \"coredns-5dd5756b68-wx29h\" (UID: \"782e61d5-3652-4825-815d-3cbbe7a1e5f8\") " pod="kube-system/coredns-5dd5756b68-wx29h"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.319110    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7v6f\" (UniqueName: \"kubernetes.io/projected/782e61d5-3652-4825-815d-3cbbe7a1e5f8-kube-api-access-j7v6f\") pod \"coredns-5dd5756b68-wx29h\" (UID: \"782e61d5-3652-4825-815d-3cbbe7a1e5f8\") " pod="kube-system/coredns-5dd5756b68-wx29h"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.319197    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/25d4b2c1-7e52-4aa4-8812-c88200601898-tmp\") pod \"storage-provisioner\" (UID: \"25d4b2c1-7e52-4aa4-8812-c88200601898\") " pod="kube-system/storage-provisioner"
	Oct 13 22:01:36 old-k8s-version-534822 kubelet[1384]: I1013 22:01:36.319228    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xh7q\" (UniqueName: \"kubernetes.io/projected/25d4b2c1-7e52-4aa4-8812-c88200601898-kube-api-access-9xh7q\") pod \"storage-provisioner\" (UID: \"25d4b2c1-7e52-4aa4-8812-c88200601898\") " pod="kube-system/storage-provisioner"
	Oct 13 22:01:37 old-k8s-version-534822 kubelet[1384]: I1013 22:01:37.132838    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wx29h" podStartSLOduration=14.13278996 podCreationTimestamp="2025-10-13 22:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:37.132724282 +0000 UTC m=+27.206003269" watchObservedRunningTime="2025-10-13 22:01:37.13278996 +0000 UTC m=+27.206068950"
	Oct 13 22:01:39 old-k8s-version-534822 kubelet[1384]: I1013 22:01:39.078687    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.078632163 podCreationTimestamp="2025-10-13 22:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:37.153321512 +0000 UTC m=+27.226600498" watchObservedRunningTime="2025-10-13 22:01:39.078632163 +0000 UTC m=+29.151911151"
	Oct 13 22:01:39 old-k8s-version-534822 kubelet[1384]: I1013 22:01:39.079812    1384 topology_manager.go:215] "Topology Admit Handler" podUID="26402fa0-a911-42a3-ad38-62ca0dd617e3" podNamespace="default" podName="busybox"
	Oct 13 22:01:39 old-k8s-version-534822 kubelet[1384]: I1013 22:01:39.232154    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvnv\" (UniqueName: \"kubernetes.io/projected/26402fa0-a911-42a3-ad38-62ca0dd617e3-kube-api-access-vrvnv\") pod \"busybox\" (UID: \"26402fa0-a911-42a3-ad38-62ca0dd617e3\") " pod="default/busybox"
	Oct 13 22:01:41 old-k8s-version-534822 kubelet[1384]: I1013 22:01:41.141367    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.4023492260000001 podCreationTimestamp="2025-10-13 22:01:39 +0000 UTC" firstStartedPulling="2025-10-13 22:01:39.407171785 +0000 UTC m=+29.480450756" lastFinishedPulling="2025-10-13 22:01:40.146143521 +0000 UTC m=+30.219422490" observedRunningTime="2025-10-13 22:01:41.140943006 +0000 UTC m=+31.214222017" watchObservedRunningTime="2025-10-13 22:01:41.14132096 +0000 UTC m=+31.214599946"
	
	
	==> storage-provisioner [95071c3cc5c629aa682aaff934d3f97511cc258bf1f9e4444079bbe050d37580] <==
	I1013 22:01:36.519180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:01:36.528865       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:01:36.528936       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 22:01:36.537819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:01:36.538126       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-534822_92dce240-039e-41fa-85e9-9cdd4984a126!
	I1013 22:01:36.538133       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b441445e-9dde-4324-afa9-3eced6881d1d", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-534822_92dce240-039e-41fa-85e9-9cdd4984a126 became leader
	I1013 22:01:36.639033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-534822_92dce240-039e-41fa-85e9-9cdd4984a126!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-534822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.917516ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:02:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
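	[editor's note] MK_ADDON_ENABLE_PAUSED is raised by minikube's pre-flight "check paused" step, which with CRI-O shells out to `sudo runc list -f json`; on a node where /run/runc was never created, runc itself exits 1 with the "open /run/runc: no such file or directory" seen in the stderr above. A minimal Go sketch of that probe, assuming the same invocation as the log (the type, function names, and error handling here are illustrative, not minikube's actual code):
	
		package main
		
		import (
			"encoding/json"
			"fmt"
			"os/exec"
		)
		
		// runcContainer holds the two fields of `runc list -f json` output
		// that a paused-check needs. (Illustrative; not minikube's own type.)
		type runcContainer struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		
		// listPaused returns the IDs of paused containers. If /run/runc is
		// missing, runc exits 1 with "open /run/runc: no such file or
		// directory" -- the exact failure captured above.
		func listPaused() ([]string, error) {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err != nil {
				return nil, fmt.Errorf("runc list: %w", err)
			}
			var cs []runcContainer
			if err := json.Unmarshal(out, &cs); err != nil {
				return nil, err
			}
			var paused []string
			for _, c := range cs {
				if c.Status == "paused" {
					paused = append(paused, c.ID)
				}
			}
			return paused, nil
		}
		
		func main() {
			ids, err := listPaused()
			if err != nil {
				fmt.Println("check paused:", err) // surfaces as MK_ADDON_ENABLE_PAUSED
				return
			}
			fmt.Println("paused containers:", ids)
		}
	
	Any non-zero exit from the probe, including this missing-state-dir case, aborts the addon change before container states are even inspected.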
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-080337 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-080337 describe deploy/metrics-server -n kube-system: exit status 1 (60.90312ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-080337 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
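	[editor's note] The expected string above is mechanical: the --registries=MetricsServer=fake.domain override is prefixed to the --images=MetricsServer=registry.k8s.io/echoserver:1.4 override. A tiny sketch of that composition, assuming the simple prefix rule implied by the flags (a model of the behaviour, not minikube's implementation):
	
		package main
		
		import "fmt"
		
		// overrideImage prefixes a custom registry to a custom image reference,
		// the way "fake.domain" + "registry.k8s.io/echoserver:1.4" yields the
		// expected "fake.domain/registry.k8s.io/echoserver:1.4".
		func overrideImage(registry, image string) string {
			if registry == "" {
				return image
			}
			return registry + "/" + image
		}
		
		func main() {
			fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
			// Output: fake.domain/registry.k8s.io/echoserver:1.4
		}
	
	Since the addon never enabled, `kubectl describe` finds no metrics-server deployment and the expectation fails against empty output.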
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-080337
helpers_test.go:243: (dbg) docker inspect no-preload-080337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	        "Created": "2025-10-13T22:01:13.425171095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:01:13.470678413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hosts",
	        "LogPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8-json.log",
	        "Name": "/no-preload-080337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-080337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-080337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	                "LowerDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-080337",
	                "Source": "/var/lib/docker/volumes/no-preload-080337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-080337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-080337",
	                "name.minikube.sigs.k8s.io": "no-preload-080337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "719e5044ec977f74de6ec3eb4017b8b3955bacd6b16c8c3a1be5d15682bb1519",
	            "SandboxKey": "/var/run/docker/netns/719e5044ec97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-080337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:f1:62:12:38:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "023fbfd0e79f229835d49fb4d5f52967eb961e42ade48e5f1189467342508af0",
	                    "EndpointID": "692b965fc186656a5a18d491dfe9f832a7b063990e378e8ba0b988283a37a245",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-080337",
	                        "582c4b9df6d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
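	[editor's note] The post-mortem dumps the full inspect document; when only a single field matters, such as the host port bound to 8443/tcp in the NetworkSettings above, `docker inspect --format` can apply a Go template instead. A hedged one-off sketch (not part of helpers_test.go; the profile name is the container inspected above):
	
		package main
		
		import (
			"fmt"
			"os/exec"
		)
		
		func main() {
			// `docker inspect --format` applies a Go text/template to the
			// inspect output, so only the 8443/tcp binding is printed
			// instead of ~300 lines of JSON.
			out, err := exec.Command("docker", "inspect",
				"--format", `{{json (index .NetworkSettings.Ports "8443/tcp")}}`,
				"no-preload-080337").Output()
			if err != nil {
				fmt.Println("docker inspect:", err)
				return
			}
			fmt.Printf("%s", out) // e.g. [{"HostIp":"127.0.0.1","HostPort":"33061"}]
		}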
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-080337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-080337 logs -n 25: (1.030643915s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo docker system info                                                                                                                                                                                                      │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo containerd config dump                                                                                                                                                                                                  │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902 │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ stop    │ -p old-k8s-version-534822 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:02:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:02:07.024852  464437 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:02:07.025148  464437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:07.025160  464437 out.go:374] Setting ErrFile to fd 2...
	I1013 22:02:07.025164  464437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:07.025351  464437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:02:07.025803  464437 out.go:368] Setting JSON to false
	I1013 22:02:07.027200  464437 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6275,"bootTime":1760386652,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:02:07.027322  464437 start.go:141] virtualization: kvm guest
	I1013 22:02:07.029305  464437 out.go:179] * [old-k8s-version-534822] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:02:07.030790  464437 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:02:07.030817  464437 notify.go:220] Checking for updates...
	I1013 22:02:07.033355  464437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:02:07.034610  464437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:02:07.035858  464437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:02:07.037283  464437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:02:07.038980  464437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:02:07.040625  464437 config.go:182] Loaded profile config "old-k8s-version-534822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:02:07.042312  464437 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1013 22:02:07.043481  464437 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:02:07.069091  464437 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:02:07.069183  464437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:07.126703  464437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:02:07.116655708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:02:07.126830  464437 docker.go:318] overlay module found
	I1013 22:02:07.128493  464437 out.go:179] * Using the docker driver based on existing profile
	I1013 22:02:07.129838  464437 start.go:305] selected driver: docker
	I1013 22:02:07.129856  464437 start.go:925] validating driver "docker" against &{Name:old-k8s-version-534822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-534822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:07.129966  464437 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:02:07.130568  464437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:07.188014  464437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:02:07.176860173 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:02:07.188304  464437 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:02:07.188329  464437 cni.go:84] Creating CNI manager for ""
	I1013 22:02:07.188380  464437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:07.188418  464437 start.go:349] cluster config:
	{Name:old-k8s-version-534822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-534822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:07.190252  464437 out.go:179] * Starting "old-k8s-version-534822" primary control-plane node in "old-k8s-version-534822" cluster
	I1013 22:02:07.191614  464437 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:02:07.192881  464437 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:02:07.194068  464437 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:02:07.194119  464437 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1013 22:02:07.194131  464437 cache.go:58] Caching tarball of preloaded images
	I1013 22:02:07.194183  464437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:02:07.194228  464437 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:02:07.194238  464437 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1013 22:02:07.194329  464437 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/config.json ...
	I1013 22:02:07.216529  464437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:02:07.216550  464437 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:02:07.216566  464437 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:02:07.216602  464437 start.go:360] acquireMachinesLock for old-k8s-version-534822: {Name:mka35ef823fac124b485a7d553225045ef8cd157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:07.216710  464437 start.go:364] duration metric: took 72.447µs to acquireMachinesLock for "old-k8s-version-534822"
	I1013 22:02:07.216732  464437 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:02:07.216738  464437 fix.go:54] fixHost starting: 
	I1013 22:02:07.217024  464437 cli_runner.go:164] Run: docker container inspect old-k8s-version-534822 --format={{.State.Status}}
	I1013 22:02:07.234375  464437 fix.go:112] recreateIfNeeded on old-k8s-version-534822: state=Stopped err=<nil>
	W1013 22:02:07.234426  464437 fix.go:138] unexpected machine state, will restart: <nil>
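
The acquireMachinesLock entry above (Delay:500ms Timeout:10m0s, duration metric 72.447µs) reflects a timed, retrying lock around machine operations. A minimal Go sketch of that retry/timeout shape follows; tryLock is a hypothetical stand-in for the real file-lock primitive, so only the loop structure is grounded in the log line.

	// Sketch of a timed lock-acquisition loop, assuming a hypothetical
	// non-blocking tryLock primitive keyed by machine name.
	package main

	import (
		"fmt"
		"time"
	)

	func tryLock(name string) (release func(), err error) {
		// Hypothetical: attempt a non-blocking file lock for this machine.
		return func() {}, nil
	}

	func acquireWithRetry(name string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			if release, err := tryLock(name); err == nil {
				return release, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("acquiring lock for %q: timed out after %v", name, timeout)
			}
			time.Sleep(delay) // back off; mirrors the 500ms Delay in the log entry
		}
	}

	func main() {
		release, err := acquireWithRetry("old-k8s-version-534822", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock acquired")
	}
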
	I1013 22:02:06.556085  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:06.556117  410447 cri.go:89] found id: ""
	I1013 22:02:06.556128  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:06.556194  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:06.560231  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:06.560297  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:06.590316  410447 cri.go:89] found id: ""
	I1013 22:02:06.590339  410447 logs.go:282] 0 containers: []
	W1013 22:02:06.590347  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:06.590353  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:06.590401  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:06.619500  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:06.619530  410447 cri.go:89] found id: ""
	I1013 22:02:06.619542  410447 logs.go:282] 1 containers: [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:02:06.619609  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:06.623855  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:06.623922  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:06.653736  410447 cri.go:89] found id: ""
	I1013 22:02:06.653764  410447 logs.go:282] 0 containers: []
	W1013 22:02:06.653774  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:06.653782  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:06.653852  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:06.683705  410447 cri.go:89] found id: ""
	I1013 22:02:06.683751  410447 logs.go:282] 0 containers: []
	W1013 22:02:06.683762  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:06.683780  410447 logs.go:123] Gathering logs for kube-apiserver [2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a] ...
	I1013 22:02:06.683799  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2bca0d4b76f96c86a214254a1d7f4cc89b2ae472dc6171bf4ead501afcd93b4a"
	I1013 22:02:06.722970  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:06.723025  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:06.787615  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:02:06.787653  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:06.819796  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:06.819827  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:06.923521  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:06.923554  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:06.939947  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:06.939981  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
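
The block above shows the log-gathering pattern: discover container IDs with `crictl ps -a --quiet --name=<component>`, then tail each match with `crictl logs --tail 400 <id>`. A minimal sketch of the same two-step pattern in Go, run locally via os/exec rather than over SSH (the command names are taken verbatim from the log lines; everything else is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI containers (any state) whose name matches.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs returns the last 400 log lines for one container ID.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id)
				fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
			}
		}
	}
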
	
	
	==> CRI-O <==
	Oct 13 22:02:01 no-preload-080337 crio[773]: time="2025-10-13T22:02:01.523287996Z" level=info msg="Starting container: 909cb08205f1bb17a69f7c95dd2c27786c93c169acf1697b2be8edc6551d1f30" id=d8784a67-52f3-41cf-8508-42053a67b3e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:01 no-preload-080337 crio[773]: time="2025-10-13T22:02:01.52524068Z" level=info msg="Started container" PID=2926 containerID=909cb08205f1bb17a69f7c95dd2c27786c93c169acf1697b2be8edc6551d1f30 description=kube-system/coredns-66bc5c9577-n6t7s/coredns id=d8784a67-52f3-41cf-8508-42053a67b3e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70212833b54d4cbc85d4c50619060c26bea38ffb3a187cdb54386cd8c1fc1ae1
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.218571527Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0b3d68f2-398f-4c85-8c53-c72453818ceb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.218722711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.223774324Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:24cb0c8d81042e019f953aea0123970b557e447e0fa46299aa2c7b006bd13ff7 UID:b8938720-a9c3-41e9-8f57-5cd2919e55d7 NetNS:/var/run/netns/d68bb53a-cfd9-430c-ba80-50543ff94b05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d9a8f0}] Aliases:map[]}"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.223805111Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.233701387Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:24cb0c8d81042e019f953aea0123970b557e447e0fa46299aa2c7b006bd13ff7 UID:b8938720-a9c3-41e9-8f57-5cd2919e55d7 NetNS:/var/run/netns/d68bb53a-cfd9-430c-ba80-50543ff94b05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d9a8f0}] Aliases:map[]}"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.233842365Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.234674867Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.235556849Z" level=info msg="Ran pod sandbox 24cb0c8d81042e019f953aea0123970b557e447e0fa46299aa2c7b006bd13ff7 with infra container: default/busybox/POD" id=0b3d68f2-398f-4c85-8c53-c72453818ceb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.236784774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=414956e9-09d3-4f92-bdcd-d97f45b5fd8b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.236926221Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=414956e9-09d3-4f92-bdcd-d97f45b5fd8b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.236980986Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=414956e9-09d3-4f92-bdcd-d97f45b5fd8b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.237596743Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8007f15-7e5d-48d8-803d-ca0ac011549b name=/runtime.v1.ImageService/PullImage
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.239051332Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.961917405Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d8007f15-7e5d-48d8-803d-ca0ac011549b name=/runtime.v1.ImageService/PullImage
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.962521825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fbd983c7-5074-40d6-b04f-e07f4308543b name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.963803462Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a79bd48b-e2fd-4166-bb43-48c7409d90a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.967383168Z" level=info msg="Creating container: default/busybox/busybox" id=9bae2f90-63b0-4537-a1d9-841e63b9c430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.968053374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.972470682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:04 no-preload-080337 crio[773]: time="2025-10-13T22:02:04.973064449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:05 no-preload-080337 crio[773]: time="2025-10-13T22:02:05.004717435Z" level=info msg="Created container ebd92cf7dd6d77b12a06c7793f6c8f69da6b13f2c24c855e6ed5f63e854fa525: default/busybox/busybox" id=9bae2f90-63b0-4537-a1d9-841e63b9c430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:05 no-preload-080337 crio[773]: time="2025-10-13T22:02:05.005346981Z" level=info msg="Starting container: ebd92cf7dd6d77b12a06c7793f6c8f69da6b13f2c24c855e6ed5f63e854fa525" id=41d5456d-b765-49fc-8d31-dae3ff5474bb name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:05 no-preload-080337 crio[773]: time="2025-10-13T22:02:05.007160967Z" level=info msg="Started container" PID=3001 containerID=ebd92cf7dd6d77b12a06c7793f6c8f69da6b13f2c24c855e6ed5f63e854fa525 description=default/busybox/busybox id=41d5456d-b765-49fc-8d31-dae3ff5474bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=24cb0c8d81042e019f953aea0123970b557e447e0fa46299aa2c7b006bd13ff7
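
The CRI-O entries above are the server side of the runtime.v1 RuntimeService RPCs named in each line (RunPodSandbox, CreateContainer, StartContainer). A minimal client sketch against the same service, assuming the standard k8s.io/cri-api bindings and CRI-O's default socket path (/var/run/crio/crio.sock):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default CRI-O socket; adjust the path for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC family as the StartContainer/RunPodSandbox entries above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			if c.Metadata != nil {
				fmt.Printf("%s  %s  %s\n", c.Id, c.State, c.Metadata.Name)
			}
		}
	}
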
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ebd92cf7dd6d7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   24cb0c8d81042       busybox                                     default
	909cb08205f1b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   70212833b54d4       coredns-66bc5c9577-n6t7s                    kube-system
	8ed6f120835bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   24bc4c51fcae5       storage-provisioner                         kube-system
	d70fa980f7664       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   bc696bce5b67a       kindnet-74766                               kube-system
	3c530f7c770b3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   7a9514da60f3d       kube-proxy-2scrx                            kube-system
	c0173cdfe4887       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   7157aa1225b5e       etcd-no-preload-080337                      kube-system
	5158fb9d96147       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   6655dc34dc87b       kube-controller-manager-no-preload-080337   kube-system
	31da5b023355e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   bf019f894cc8d       kube-scheduler-no-preload-080337            kube-system
	2159c18dd47d2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   0a3f267a487f6       kube-apiserver-no-preload-080337            kube-system
	
	
	==> coredns [909cb08205f1bb17a69f7c95dd2c27786c93c169acf1697b2be8edc6551d1f30] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42000 - 36394 "HINFO IN 3326550971167685911.6506575590682688755. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.112369811s
	
	
	==> describe nodes <==
	Name:               no-preload-080337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-080337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-080337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-080337
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:02:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:02:03 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:02:03 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:02:03 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:02:03 +0000   Mon, 13 Oct 2025 22:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-080337
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                b626e944-ef41-4bbd-9e16-cce1552f60c7
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-n6t7s                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-080337                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-74766                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-080337             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-080337    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-2scrx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-080337             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-080337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-080337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-080337 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node no-preload-080337 event: Registered Node no-preload-080337 in Controller
	  Normal  NodeReady                11s   kubelet          Node no-preload-080337 status is now: NodeReady
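
The Conditions table above is what `kubectl describe nodes` renders from each Node's Status.Conditions. A minimal client-go sketch reading the same Type/Status/Reason columns directly, assuming a reachable kubeconfig at the default path (minikube writes one per profile):

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Mirror the Type/Status/Reason columns of the Conditions table.
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				fmt.Printf("%s\t%s\t%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
			}
		}
	}
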
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [c0173cdfe4887dfa04d48f17aa25c540e8992be36556e56302f9d1c517ad3c16] <==
	{"level":"warn","ts":"2025-10-13T22:01:39.671633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.679760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.687111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.694146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.701270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.708631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.715740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.723216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.730854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.738595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.745243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.752936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.761308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.768481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.776226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.783712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.791555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.798287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.805601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.813071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.823872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.831256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.838459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:01:39.894216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59922","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:01:45.510829Z","caller":"traceutil/trace.go:172","msg":"trace[316527101] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"104.961662ms","start":"2025-10-13T22:01:45.405845Z","end":"2025-10-13T22:01:45.510807Z","steps":["trace[316527101] 'process raft request'  (duration: 104.837032ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:02:12 up  1:44,  0 user,  load average: 3.88, 3.28, 6.06
	Linux no-preload-080337 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d70fa980f766495407704a38702f1cfcb8e069794d3c882203c28ff08b210815] <==
	I1013 22:01:50.905169       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:01:50.905431       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:01:50.905590       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:01:50.905612       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:01:50.905631       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:01:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:01:51.105551       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:01:51.105653       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:01:51.105671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:01:51.205581       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:01:51.407051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:01:51.407074       1 metrics.go:72] Registering metrics
	I1013 22:01:51.407134       1 controller.go:711] "Syncing nftables rules"
	I1013 22:02:01.111266       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:02:01.111329       1 main.go:301] handling current node
	I1013 22:02:11.109066       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:02:11.109127       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2159c18dd47d29a8f71979edb21a24764c6c0fb9ee68d3285a6bc5e55b273b76] <==
	I1013 22:01:40.416650       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:01:40.416730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:01:40.417736       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:01:40.418020       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:01:40.418231       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:01:40.434562       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:01:40.597716       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:01:41.321265       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:01:41.326210       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:01:41.326227       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:01:41.766240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:01:41.812420       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:01:41.918303       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:01:41.923953       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1013 22:01:41.924952       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:01:41.929011       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:01:42.358619       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:01:42.885406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:01:42.895064       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:01:42.902341       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:01:48.012889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:01:48.018022       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:01:48.211259       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:01:48.462087       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 22:02:11.004362       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:45766: use of closed network connection
	
	
	==> kube-controller-manager [5158fb9d9614774a78d66f82438ece2a54db09b62141c6395c522f613ffb5165] <==
	I1013 22:01:47.357793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:01:47.357820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:01:47.357834       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:01:47.357834       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:01:47.357886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:01:47.357905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:01:47.358007       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:01:47.359136       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:01:47.359152       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:01:47.359177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:01:47.359206       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:01:47.359224       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:01:47.359304       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:01:47.359314       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:01:47.360256       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:01:47.360478       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:01:47.360604       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-080337"
	I1013 22:01:47.360661       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:01:47.364441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:01:47.373646       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:01:47.376968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:01:47.383375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:01:47.383397       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:01:47.383404       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:02:02.362446       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3c530f7c770b333dc2649357167b40e7d7aea470ad624d102b2562a8f57df35f] <==
	I1013 22:01:48.683649       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:01:48.748158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:01:48.849167       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:01:48.849221       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:01:48.849307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:01:48.871164       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:01:48.871220       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:01:48.877279       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:01:48.877671       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:01:48.877710       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:01:48.879491       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:01:48.879510       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:01:48.879527       1 config.go:200] "Starting service config controller"
	I1013 22:01:48.879532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:01:48.879545       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:01:48.879550       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:01:48.879624       1 config.go:309] "Starting node config controller"
	I1013 22:01:48.879640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:01:48.879649       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:01:48.980402       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:01:48.980424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:01:48.980452       1 shared_informer.go:356] "Caches are synced" controller="service config"
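
The paired "Waiting for caches to sync" / "Caches are synced" lines above are client-go's shared-informer startup protocol: build informers, start the factory, then block until each local cache mirrors the API server. A minimal sketch of that sequence with a single service informer, assuming a reachable default kubeconfig:

	package main

	import (
		"fmt"
		"path/filepath"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		stop := make(chan struct{})
		defer close(stop)

		// Build one informer, start the factory, then wait for the cache
		// to sync; the two log phases above correspond to these steps.
		factory := informers.NewSharedInformerFactory(clientset, 0)
		svcInformer := factory.Core().V1().Services().Informer()
		factory.Start(stop)
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("Caches are synced")
	}
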
	
	
	==> kube-scheduler [31da5b023355e9d39b43cde9db4375a9782a618ca08adf2040c681c15b6b117e] <==
	E1013 22:01:40.369870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:01:40.370015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:01:40.369930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:01:40.370009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:01:40.369880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:01:40.370114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:01:40.370121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:01:40.370198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:01:40.370217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:01:40.370255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:01:40.370287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:01:40.370320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:01:40.370321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:01:40.370724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:01:40.370824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:01:41.192886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 22:01:41.204175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:01:41.250556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:01:41.314163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:01:41.352304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:01:41.432970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:01:41.534454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:01:41.594692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:01:41.606858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1013 22:01:43.266511       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:01:43 no-preload-080337 kubelet[2318]: E1013 22:01:43.746268    2318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-no-preload-080337\" already exists" pod="kube-system/kube-controller-manager-no-preload-080337"
	Oct 13 22:01:43 no-preload-080337 kubelet[2318]: I1013 22:01:43.771365    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-080337" podStartSLOduration=1.77134066 podStartE2EDuration="1.77134066s" podCreationTimestamp="2025-10-13 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:43.759183442 +0000 UTC m=+1.126298792" watchObservedRunningTime="2025-10-13 22:01:43.77134066 +0000 UTC m=+1.138456011"
	Oct 13 22:01:43 no-preload-080337 kubelet[2318]: I1013 22:01:43.771501    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-080337" podStartSLOduration=1.77149423 podStartE2EDuration="1.77149423s" podCreationTimestamp="2025-10-13 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:43.771453925 +0000 UTC m=+1.138569278" watchObservedRunningTime="2025-10-13 22:01:43.77149423 +0000 UTC m=+1.138609572"
	Oct 13 22:01:43 no-preload-080337 kubelet[2318]: I1013 22:01:43.791481    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-080337" podStartSLOduration=1.7914550679999999 podStartE2EDuration="1.791455068s" podCreationTimestamp="2025-10-13 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:43.782763236 +0000 UTC m=+1.149878579" watchObservedRunningTime="2025-10-13 22:01:43.791455068 +0000 UTC m=+1.158570421"
	Oct 13 22:01:43 no-preload-080337 kubelet[2318]: I1013 22:01:43.791640    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-080337" podStartSLOduration=1.791632962 podStartE2EDuration="1.791632962s" podCreationTimestamp="2025-10-13 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:43.791408932 +0000 UTC m=+1.158524285" watchObservedRunningTime="2025-10-13 22:01:43.791632962 +0000 UTC m=+1.158748313"
	Oct 13 22:01:47 no-preload-080337 kubelet[2318]: I1013 22:01:47.348678    2318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 22:01:47 no-preload-080337 kubelet[2318]: I1013 22:01:47.349518    2318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.244821    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/055d27fc-daa6-45c6-b0a7-492a6eb17617-xtables-lock\") pod \"kindnet-74766\" (UID: \"055d27fc-daa6-45c6-b0a7-492a6eb17617\") " pod="kube-system/kindnet-74766"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.244893    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79lr\" (UniqueName: \"kubernetes.io/projected/055d27fc-daa6-45c6-b0a7-492a6eb17617-kube-api-access-m79lr\") pod \"kindnet-74766\" (UID: \"055d27fc-daa6-45c6-b0a7-492a6eb17617\") " pod="kube-system/kindnet-74766"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245027    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f331f5d-7309-4de9-a64c-41a65ee37e7d-lib-modules\") pod \"kube-proxy-2scrx\" (UID: \"5f331f5d-7309-4de9-a64c-41a65ee37e7d\") " pod="kube-system/kube-proxy-2scrx"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245069    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8p2\" (UniqueName: \"kubernetes.io/projected/5f331f5d-7309-4de9-a64c-41a65ee37e7d-kube-api-access-xp8p2\") pod \"kube-proxy-2scrx\" (UID: \"5f331f5d-7309-4de9-a64c-41a65ee37e7d\") " pod="kube-system/kube-proxy-2scrx"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245099    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f331f5d-7309-4de9-a64c-41a65ee37e7d-xtables-lock\") pod \"kube-proxy-2scrx\" (UID: \"5f331f5d-7309-4de9-a64c-41a65ee37e7d\") " pod="kube-system/kube-proxy-2scrx"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245122    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/055d27fc-daa6-45c6-b0a7-492a6eb17617-lib-modules\") pod \"kindnet-74766\" (UID: \"055d27fc-daa6-45c6-b0a7-492a6eb17617\") " pod="kube-system/kindnet-74766"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245147    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f331f5d-7309-4de9-a64c-41a65ee37e7d-kube-proxy\") pod \"kube-proxy-2scrx\" (UID: \"5f331f5d-7309-4de9-a64c-41a65ee37e7d\") " pod="kube-system/kube-proxy-2scrx"
	Oct 13 22:01:48 no-preload-080337 kubelet[2318]: I1013 22:01:48.245161    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/055d27fc-daa6-45c6-b0a7-492a6eb17617-cni-cfg\") pod \"kindnet-74766\" (UID: \"055d27fc-daa6-45c6-b0a7-492a6eb17617\") " pod="kube-system/kindnet-74766"
	Oct 13 22:01:49 no-preload-080337 kubelet[2318]: I1013 22:01:49.283397    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2scrx" podStartSLOduration=1.28337736 podStartE2EDuration="1.28337736s" podCreationTimestamp="2025-10-13 22:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:01:48.766147047 +0000 UTC m=+6.133262397" watchObservedRunningTime="2025-10-13 22:01:49.28337736 +0000 UTC m=+6.650492711"
	Oct 13 22:01:50 no-preload-080337 kubelet[2318]: I1013 22:01:50.769802    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-74766" podStartSLOduration=0.737088539 podStartE2EDuration="2.769785398s" podCreationTimestamp="2025-10-13 22:01:48 +0000 UTC" firstStartedPulling="2025-10-13 22:01:48.550867024 +0000 UTC m=+5.917982373" lastFinishedPulling="2025-10-13 22:01:50.583563897 +0000 UTC m=+7.950679232" observedRunningTime="2025-10-13 22:01:50.769568803 +0000 UTC m=+8.136684154" watchObservedRunningTime="2025-10-13 22:01:50.769785398 +0000 UTC m=+8.136900754"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.139692    2318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.247180    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65d002b2-28ab-45b2-aa56-7d828173f096-config-volume\") pod \"coredns-66bc5c9577-n6t7s\" (UID: \"65d002b2-28ab-45b2-aa56-7d828173f096\") " pod="kube-system/coredns-66bc5c9577-n6t7s"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.247240    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9bpz\" (UniqueName: \"kubernetes.io/projected/65d002b2-28ab-45b2-aa56-7d828173f096-kube-api-access-x9bpz\") pod \"coredns-66bc5c9577-n6t7s\" (UID: \"65d002b2-28ab-45b2-aa56-7d828173f096\") " pod="kube-system/coredns-66bc5c9577-n6t7s"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.247326    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f65bcb32-fd36-4634-b3f1-b93eb14848bb-tmp\") pod \"storage-provisioner\" (UID: \"f65bcb32-fd36-4634-b3f1-b93eb14848bb\") " pod="kube-system/storage-provisioner"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.247388    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whxjj\" (UniqueName: \"kubernetes.io/projected/f65bcb32-fd36-4634-b3f1-b93eb14848bb-kube-api-access-whxjj\") pod \"storage-provisioner\" (UID: \"f65bcb32-fd36-4634-b3f1-b93eb14848bb\") " pod="kube-system/storage-provisioner"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.807862    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n6t7s" podStartSLOduration=13.8078422 podStartE2EDuration="13.8078422s" podCreationTimestamp="2025-10-13 22:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:02:01.807724755 +0000 UTC m=+19.174840106" watchObservedRunningTime="2025-10-13 22:02:01.8078422 +0000 UTC m=+19.174957552"
	Oct 13 22:02:01 no-preload-080337 kubelet[2318]: I1013 22:02:01.817957    2318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.817938677 podStartE2EDuration="13.817938677s" podCreationTimestamp="2025-10-13 22:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:02:01.817365547 +0000 UTC m=+19.184480898" watchObservedRunningTime="2025-10-13 22:02:01.817938677 +0000 UTC m=+19.185054027"
	Oct 13 22:02:03 no-preload-080337 kubelet[2318]: I1013 22:02:03.962381    2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkd22\" (UniqueName: \"kubernetes.io/projected/b8938720-a9c3-41e9-8f57-5cd2919e55d7-kube-api-access-gkd22\") pod \"busybox\" (UID: \"b8938720-a9c3-41e9-8f57-5cd2919e55d7\") " pod="default/busybox"
	
	
	==> storage-provisioner [8ed6f120835bca37e955cc7cd4648fe4d63b7b55b4224d73f7ad602d26921102] <==
	I1013 22:02:01.530917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:02:01.540101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:02:01.540220       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:02:01.542241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:01.547082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:02:01.547336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:02:01.547582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-080337_ef41c869-ef59-4e13-9934-f07d6b7ed949!
	I1013 22:02:01.547615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7f52034-0e22-43b7-ac83-32c79d19cae9", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-080337_ef41c869-ef59-4e13-9934-f07d6b7ed949 became leader
	W1013 22:02:01.549386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:01.552983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:02:01.647825       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-080337_ef41c869-ef59-4e13-9934-f07d6b7ed949!
	W1013 22:02:03.556702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:03.561889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:05.565389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:05.569291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:07.572294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:07.576292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:09.579746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:09.585442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:11.589367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:02:11.594031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
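Note: the repeated "v1 Endpoints is deprecated in v1.33+" warnings above come from the storage provisioner's leader election, which still takes its lock on a v1 Endpoints object (the kube-system/k8s.io-minikube-hostpath lock seen in the log). Modern client-go code avoids the warning by locking on a coordination.k8s.io/v1 Lease instead; a minimal sketch under that assumption (illustrative only, not the provisioner's actual code):

	// Hedged sketch: Lease-based leader election with client-go, in place of
	// the deprecated Endpoints lock that triggers the warnings above.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func runWithLeaseLock(cfg *rest.Config, id string) error {
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		lock := &resourcelock.LeaseLock{
			// Same lock name/namespace as the Endpoints object in the log.
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		// Blocks until the context is cancelled or leadership is lost.
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work promptly */ },
			},
		})
		return nil
	}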
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-080337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.12s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-534822 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-534822 --alsologtostderr -v=1: exit status 80 (2.542678632s)

-- stdout --
	* Pausing node old-k8s-version-534822 ... 
	
	

-- /stdout --
** stderr ** 
	I1013 22:03:01.277886  472871 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:01.278026  472871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:01.278035  472871 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:01.278040  472871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:01.278283  472871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:01.278577  472871 out.go:368] Setting JSON to false
	I1013 22:03:01.278639  472871 mustload.go:65] Loading cluster: old-k8s-version-534822
	I1013 22:03:01.279035  472871 config.go:182] Loaded profile config "old-k8s-version-534822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:03:01.279516  472871 cli_runner.go:164] Run: docker container inspect old-k8s-version-534822 --format={{.State.Status}}
	I1013 22:03:01.308669  472871 host.go:66] Checking if "old-k8s-version-534822" exists ...
	I1013 22:03:01.309085  472871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:01.396596  472871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-13 22:03:01.382773853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:01.397475  472871 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-534822 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:03:01.399475  472871 out.go:179] * Pausing node old-k8s-version-534822 ... 
	I1013 22:03:01.401354  472871 host.go:66] Checking if "old-k8s-version-534822" exists ...
	I1013 22:03:01.401782  472871 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:01.401844  472871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-534822
	I1013 22:03:01.426411  472871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/old-k8s-version-534822/id_rsa Username:docker}
	I1013 22:03:01.536370  472871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:01.551291  472871 pause.go:52] kubelet running: true
	I1013 22:03:01.551358  472871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:01.710082  472871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:01.710170  472871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:01.783063  472871 cri.go:89] found id: "37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc"
	I1013 22:03:01.783084  472871 cri.go:89] found id: "c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1"
	I1013 22:03:01.783088  472871 cri.go:89] found id: "c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9"
	I1013 22:03:01.783091  472871 cri.go:89] found id: "b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	I1013 22:03:01.783093  472871 cri.go:89] found id: "e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e"
	I1013 22:03:01.783096  472871 cri.go:89] found id: "8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2"
	I1013 22:03:01.783098  472871 cri.go:89] found id: "a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db"
	I1013 22:03:01.783100  472871 cri.go:89] found id: "90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6"
	I1013 22:03:01.783103  472871 cri.go:89] found id: "fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2"
	I1013 22:03:01.783107  472871 cri.go:89] found id: "24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	I1013 22:03:01.783110  472871 cri.go:89] found id: "bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7"
	I1013 22:03:01.783112  472871 cri.go:89] found id: ""
	I1013 22:03:01.783150  472871 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:01.797166  472871 retry.go:31] will retry after 185.870616ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:01Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:03:01.983685  472871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:01.998261  472871 pause.go:52] kubelet running: false
	I1013 22:03:01.998319  472871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:02.147808  472871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:02.147897  472871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:02.218495  472871 cri.go:89] found id: "37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc"
	I1013 22:03:02.218514  472871 cri.go:89] found id: "c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1"
	I1013 22:03:02.218519  472871 cri.go:89] found id: "c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9"
	I1013 22:03:02.218522  472871 cri.go:89] found id: "b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	I1013 22:03:02.218524  472871 cri.go:89] found id: "e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e"
	I1013 22:03:02.218528  472871 cri.go:89] found id: "8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2"
	I1013 22:03:02.218530  472871 cri.go:89] found id: "a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db"
	I1013 22:03:02.218533  472871 cri.go:89] found id: "90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6"
	I1013 22:03:02.218536  472871 cri.go:89] found id: "fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2"
	I1013 22:03:02.218546  472871 cri.go:89] found id: "24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	I1013 22:03:02.218549  472871 cri.go:89] found id: "bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7"
	I1013 22:03:02.218553  472871 cri.go:89] found id: ""
	I1013 22:03:02.218597  472871 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:02.230815  472871 retry.go:31] will retry after 196.038748ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:02Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:03:02.427144  472871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:02.443943  472871 pause.go:52] kubelet running: false
	I1013 22:03:02.444015  472871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:02.618599  472871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:02.618695  472871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:02.703162  472871 cri.go:89] found id: "37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc"
	I1013 22:03:02.703192  472871 cri.go:89] found id: "c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1"
	I1013 22:03:02.703198  472871 cri.go:89] found id: "c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9"
	I1013 22:03:02.703203  472871 cri.go:89] found id: "b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	I1013 22:03:02.703206  472871 cri.go:89] found id: "e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e"
	I1013 22:03:02.703210  472871 cri.go:89] found id: "8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2"
	I1013 22:03:02.703214  472871 cri.go:89] found id: "a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db"
	I1013 22:03:02.703219  472871 cri.go:89] found id: "90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6"
	I1013 22:03:02.703253  472871 cri.go:89] found id: "fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2"
	I1013 22:03:02.703265  472871 cri.go:89] found id: "24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	I1013 22:03:02.703270  472871 cri.go:89] found id: "bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7"
	I1013 22:03:02.703275  472871 cri.go:89] found id: ""
	I1013 22:03:02.703342  472871 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:02.718311  472871 retry.go:31] will retry after 794.115071ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:02Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:03:03.513127  472871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:03.526760  472871 pause.go:52] kubelet running: false
	I1013 22:03:03.526814  472871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:03.670497  472871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:03.670577  472871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:03.740818  472871 cri.go:89] found id: "37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc"
	I1013 22:03:03.740851  472871 cri.go:89] found id: "c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1"
	I1013 22:03:03.740856  472871 cri.go:89] found id: "c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9"
	I1013 22:03:03.740859  472871 cri.go:89] found id: "b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	I1013 22:03:03.740862  472871 cri.go:89] found id: "e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e"
	I1013 22:03:03.740866  472871 cri.go:89] found id: "8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2"
	I1013 22:03:03.740871  472871 cri.go:89] found id: "a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db"
	I1013 22:03:03.740874  472871 cri.go:89] found id: "90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6"
	I1013 22:03:03.740878  472871 cri.go:89] found id: "fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2"
	I1013 22:03:03.740893  472871 cri.go:89] found id: "24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	I1013 22:03:03.740897  472871 cri.go:89] found id: "bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7"
	I1013 22:03:03.740901  472871 cri.go:89] found id: ""
	I1013 22:03:03.740946  472871 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:03.757479  472871 out.go:203] 
	W1013 22:03:03.758876  472871 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:03:03.758894  472871 out.go:285] * 
	* 
	W1013 22:03:03.763280  472871 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:03:03.764698  472871 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-534822 --alsologtostderr -v=1 failed: exit status 80
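Note the failure shape in the stderr trace above: pause disables the kubelet, enumerates kube-system/kubernetes-dashboard/istio-operator containers via crictl, then calls `sudo runc list -f json`, which fails because /run/runc does not exist; minikube's retry helper backs off (~186ms, ~196ms, ~794ms) and finally surfaces GUEST_PAUSE. A minimal sketch of that retry-with-increasing-backoff pattern (an illustration of the behavior shown in the log, not minikube's actual retry.go):

	// Sketch: run fn until it succeeds or attempts are exhausted, sleeping a
	// randomized, roughly doubling delay between failures, matching the
	// "will retry after ..." lines in the trace above.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retry(maxAttempts int, fn func() error) error {
		backoff := 150 * time.Millisecond
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break // give up and surface the last error, as pause does here
			}
			// Jittered backoff: somewhere in [backoff, 2*backoff).
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			backoff *= 2
		}
		return err
	}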
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-534822
helpers_test.go:243: (dbg) docker inspect old-k8s-version-534822:

-- stdout --
	[
	    {
	        "Id": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	        "Created": "2025-10-13T22:00:56.40821218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:02:07.261599258Z",
	            "FinishedAt": "2025-10-13T22:02:06.431762942Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hostname",
	        "HostsPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hosts",
	        "LogPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4-json.log",
	        "Name": "/old-k8s-version-534822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-534822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-534822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	                "LowerDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-534822",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-534822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-534822",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa70e4e266e2d0cb1159049be83189903786428371915674620b0ef8805a0e9c",
	            "SandboxKey": "/var/run/docker/netns/fa70e4e266e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-534822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:6f:c3:e0:5f:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d1498e7b1a230857c86022c34281ff31ff5a8fd51b2621fd4063f6a1e47ae63",
	                    "EndpointID": "d5d66a37f5ee7bf00ef8e83eda6b4dd34854594c06e449813f0e3467343431c4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-534822",
	                        "cebe2b59b715"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
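The "NetworkSettings.Ports" map in the inspect output above is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call walks to resolve the node's SSH endpoint (22/tcp maps to host port 33063 here). The same lookup from Go, shelling out to the docker CLI (a sketch; assumes docker is on PATH and the container exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port bound to the container's 22/tcp,
	// using the same Go template minikube's sshutil path uses above.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("old-k8s-version-534822")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // e.g. 33063 per the inspect output above
	}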
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822: exit status 2 (337.212729ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25: (1.294615873s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo containerd config dump                                                                                                                                                                                                  │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102            │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902 │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ stop    │ -p old-k8s-version-534822 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337        │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822   │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:02:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:02:31.471237  468497 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:02:31.471393  468497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:31.471404  468497 out.go:374] Setting ErrFile to fd 2...
	I1013 22:02:31.471410  468497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:31.471706  468497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:02:31.472379  468497 out.go:368] Setting JSON to false
	I1013 22:02:31.474142  468497 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6299,"bootTime":1760386652,"procs":463,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:02:31.474322  468497 start.go:141] virtualization: kvm guest
	I1013 22:02:31.476673  468497 out.go:179] * [no-preload-080337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:02:31.478186  468497 notify.go:220] Checking for updates...
	I1013 22:02:31.478350  468497 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:02:31.480566  468497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:02:31.482235  468497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:02:31.483713  468497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:02:31.485120  468497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:02:31.486506  468497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:02:27.939179  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:27.939632  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:27.939690  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:27.939763  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:27.968496  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:27.968522  410447 cri.go:89] found id: ""
	I1013 22:02:27.968532  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:27.968594  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:27.974115  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:27.974203  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:28.004361  410447 cri.go:89] found id: ""
	I1013 22:02:28.004391  410447 logs.go:282] 0 containers: []
	W1013 22:02:28.004407  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:28.004415  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:28.004475  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:28.033143  410447 cri.go:89] found id: ""
	I1013 22:02:28.033168  410447 logs.go:282] 0 containers: []
	W1013 22:02:28.033179  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:28.033187  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:28.033245  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:28.062216  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:28.062243  410447 cri.go:89] found id: ""
	I1013 22:02:28.062252  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:28.062306  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:28.066433  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:28.066495  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:28.093431  410447 cri.go:89] found id: ""
	I1013 22:02:28.093462  410447 logs.go:282] 0 containers: []
	W1013 22:02:28.093470  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:28.093477  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:28.093528  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:28.122775  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:28.122804  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:28.122814  410447 cri.go:89] found id: ""
	I1013 22:02:28.122825  410447 logs.go:282] 2 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:02:28.122887  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:28.127868  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:28.132814  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:28.132881  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:28.166079  410447 cri.go:89] found id: ""
	I1013 22:02:28.166109  410447 logs.go:282] 0 containers: []
	W1013 22:02:28.166120  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:28.166128  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:28.166185  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:28.203141  410447 cri.go:89] found id: ""
	I1013 22:02:28.203173  410447 logs.go:282] 0 containers: []
	W1013 22:02:28.203184  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:28.203201  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:28.203215  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:28.266242  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:28.266276  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:28.300859  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:28.300893  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:28.401887  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:28.401922  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:28.418246  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:28.418280  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:28.446367  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:28.446396  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:28.504377  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:28.504403  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:28.504421  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:28.541429  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:28.541473  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:28.602237  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:02:28.602275  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:31.136084  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:31.136556  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:31.136622  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:31.136693  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:31.175750  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:31.175774  410447 cri.go:89] found id: ""
	I1013 22:02:31.175785  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:31.175842  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:31.181243  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:31.181317  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:31.217498  410447 cri.go:89] found id: ""
	I1013 22:02:31.217603  410447 logs.go:282] 0 containers: []
	W1013 22:02:31.217622  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:31.217633  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:31.217696  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:31.255434  410447 cri.go:89] found id: ""
	I1013 22:02:31.255459  410447 logs.go:282] 0 containers: []
	W1013 22:02:31.255469  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:31.255480  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:31.255533  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:31.293633  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:31.293653  410447 cri.go:89] found id: ""
	I1013 22:02:31.293661  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:31.293712  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:31.299383  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:31.299455  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:31.331790  410447 cri.go:89] found id: ""
	I1013 22:02:31.331818  410447 logs.go:282] 0 containers: []
	W1013 22:02:31.331829  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:31.331836  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:31.331904  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:31.364562  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:31.364585  410447 cri.go:89] found id: "6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:31.364591  410447 cri.go:89] found id: ""
	I1013 22:02:31.364600  410447 logs.go:282] 2 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d]
	I1013 22:02:31.364652  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:31.369378  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:31.374121  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:31.374195  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:31.408080  410447 cri.go:89] found id: ""
	I1013 22:02:31.408111  410447 logs.go:282] 0 containers: []
	W1013 22:02:31.408123  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:31.408131  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:31.408194  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:31.449875  410447 cri.go:89] found id: ""
	I1013 22:02:31.449902  410447 logs.go:282] 0 containers: []
	W1013 22:02:31.449911  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:31.449930  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:31.449945  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
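
Process 410447 above is looping through the same recovery cycle: probe the apiserver's /healthz endpoint, enumerate control-plane containers with crictl, then gather journald and container logs while the probe keeps failing. A minimal sketch of the healthz-probe side of that loop, assuming a self-signed apiserver certificate and made-up timeout values (this is not minikube's actual code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz issues one GET against the apiserver healthz endpoint,
    // mirroring the "Checking apiserver healthz at ..." lines above.
    func probeHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The cluster certificate is self-signed in this scenario.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	for i := 0; i < 3; i++ {
    		if err := probeHealthz("https://192.168.76.2:8443/healthz"); err != nil {
    			fmt.Println("stopped:", err)
    			time.Sleep(2 * time.Second) // gather logs, then retry, as the log does
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }
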
	I1013 22:02:31.488630  468497 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:31.489368  468497 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:02:31.520646  468497 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:02:31.520769  468497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:31.600165  468497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:02:31.586060702 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
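
The info.go line above comes from running docker system info --format "{{json .}}" and decoding the JSON it prints. A short sketch of that pattern; the struct below keeps only a handful of fields and is an assumption for illustration, not minikube's real type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo decodes just the fields this sketch cares about from the
    // JSON printed by `docker system info --format "{{json .}}"`.
    type dockerInfo struct {
    	NCPU          int    `json:"NCPU"`
    	MemTotal      int64  `json:"MemTotal"`
    	CgroupDriver  string `json:"CgroupDriver"`
    	ServerVersion string `json:"ServerVersion"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
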
	I1013 22:02:31.600358  468497 docker.go:318] overlay module found
	I1013 22:02:31.602339  468497 out.go:179] * Using the docker driver based on existing profile
	I1013 22:02:31.604077  468497 start.go:305] selected driver: docker
	I1013 22:02:31.604096  468497 start.go:925] validating driver "docker" against &{Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:31.604221  468497 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:02:31.604986  468497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:31.675597  468497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:02:31.664661789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:02:31.676069  468497 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:02:31.676109  468497 cni.go:84] Creating CNI manager for ""
	I1013 22:02:31.676176  468497 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:31.676233  468497 start.go:349] cluster config:
	{Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:31.678877  468497 out.go:179] * Starting "no-preload-080337" primary control-plane node in "no-preload-080337" cluster
	I1013 22:02:31.680122  468497 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:02:31.681455  468497 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:02:31.682661  468497 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:02:31.682795  468497 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:02:31.682856  468497 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/config.json ...
	I1013 22:02:31.683025  468497 cache.go:107] acquiring lock: {Name:mk22a9364551c6b5c8c880eceb2cdd611b51da2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683096  468497 cache.go:107] acquiring lock: {Name:mk9154227203ad745e43a6293d5e771c17558feb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683142  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 22:02:31.683153  468497 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 184.747µs
	I1013 22:02:31.683114  468497 cache.go:107] acquiring lock: {Name:mk6931ad5aa94faa6a047c26bd9f08eca07726d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683153  468497 cache.go:107] acquiring lock: {Name:mk6d91f6f2b8cc9ae34afd3116b942c4c3dc11bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683170  468497 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 22:02:31.683180  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 22:02:31.683184  468497 cache.go:107] acquiring lock: {Name:mkf399189dc414297ba076f45e34ea1ae863ef3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683201  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 22:02:31.683190  468497 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 105.207µs
	I1013 22:02:31.683209  468497 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 22:02:31.683209  468497 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 63.159µs
	I1013 22:02:31.683217  468497 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 22:02:31.683215  468497 cache.go:107] acquiring lock: {Name:mk6019ad9dabd5e086757fd62cea931cca589008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683226  468497 cache.go:107] acquiring lock: {Name:mk6044d54e95581671b8d12eb16ba7154be9e4ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683218  468497 cache.go:107] acquiring lock: {Name:mk4a3ac78b285b903bf7de76f6d114f2486eff4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.683274  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1013 22:02:31.683282  468497 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 68.318µs
	I1013 22:02:31.683278  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 22:02:31.683296  468497 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1013 22:02:31.683312  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 22:02:31.683305  468497 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 120.768µs
	I1013 22:02:31.683354  468497 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 22:02:31.683354  468497 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 298.702µs
	I1013 22:02:31.683365  468497 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 22:02:31.683359  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 22:02:31.683380  468497 cache.go:115] /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 22:02:31.683384  468497 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 154.518µs
	I1013 22:02:31.683398  468497 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 231.982µs
	I1013 22:02:31.683409  468497 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 22:02:31.683415  468497 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-226873/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 22:02:31.683426  468497 cache.go:87] Successfully saved all images to host disk.
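
The cache lines above take a named lock per image, stat the cached tarball under .minikube/cache/images/amd64, and record how long the check took; every image is already cached here, so each check finishes in microseconds. A simplified sketch of that existence check (the path mapping and the plain mutex standing in for minikube's named-lock package are assumptions):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    	"time"
    )

    var cacheRoot = os.ExpandEnv("$HOME/.minikube/cache/images/amd64")

    // ensureCached reports whether an image already has a cached tar file,
    // mirroring the "cache image ... exists ... took ..." lines above.
    func ensureCached(mu *sync.Mutex, image string) (bool, time.Duration) {
    	start := time.Now()
    	mu.Lock()
    	defer mu.Unlock()
    	// registry.k8s.io/pause:3.10.1 -> registry.k8s.io/pause_3.10.1
    	rel := strings.ReplaceAll(image, ":", "_")
    	_, err := os.Stat(filepath.Join(cacheRoot, rel))
    	return err == nil, time.Since(start)
    }

    func main() {
    	var mu sync.Mutex
    	for _, img := range []string{
    		"registry.k8s.io/pause:3.10.1",
    		"registry.k8s.io/etcd:3.6.4-0",
    	} {
    		ok, took := ensureCached(&mu, img)
    		fmt.Printf("cache image %q exists=%v took %s\n", img, ok, took)
    	}
    }
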
	I1013 22:02:31.707317  468497 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:02:31.707343  468497 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:02:31.707365  468497 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:02:31.707406  468497 start.go:360] acquireMachinesLock for no-preload-080337: {Name:mk2bf55649fb50a9c6baaf8b730c64cf9325030f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:31.707478  468497 start.go:364] duration metric: took 49.522µs to acquireMachinesLock for "no-preload-080337"
	I1013 22:02:31.707503  468497 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:02:31.707513  468497 fix.go:54] fixHost starting: 
	I1013 22:02:31.707865  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:31.731535  468497 fix.go:112] recreateIfNeeded on no-preload-080337: state=Stopped err=<nil>
	W1013 22:02:31.731582  468497 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 22:02:28.023079  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	W1013 22:02:30.024394  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	I1013 22:02:31.734967  468497 out.go:252] * Restarting existing docker container for "no-preload-080337" ...
	I1013 22:02:31.735072  468497 cli_runner.go:164] Run: docker start no-preload-080337
	I1013 22:02:32.046327  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:32.068092  468497 kic.go:430] container "no-preload-080337" state is running.
	I1013 22:02:32.068563  468497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:02:32.092377  468497 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/config.json ...
	I1013 22:02:32.092678  468497 machine.go:93] provisionDockerMachine start ...
	I1013 22:02:32.092787  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:32.116578  468497 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:32.116906  468497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1013 22:02:32.116920  468497 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:02:32.117575  468497 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54766->127.0.0.1:33068: read: connection reset by peer
	I1013 22:02:35.274738  468497 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-080337
	
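
The dial error above followed by a clean "hostname" result shows the provisioner retrying SSH while the restarted container's sshd comes up. A hedged sketch of such a retry loop with golang.org/x/crypto/ssh (the address matches the log's forwarded port, but the auth setup and retry policy here are assumptions):

    package main

    import (
    	"fmt"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting an SSH connection until sshd answers,
    // as the provisioner appears to do after "connection reset by peer".
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		lastErr = err
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // key auth in the real flow
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         5 * time.Second,
    	}
    	client, err := dialWithRetry("127.0.0.1:33068", cfg, 10)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer sess.Close()
    	out, _ := sess.Output("hostname") // the same probe command seen above
    	fmt.Printf("hostname: %s", out)
    }
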
	I1013 22:02:35.274770  468497 ubuntu.go:182] provisioning hostname "no-preload-080337"
	I1013 22:02:35.274854  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:35.294473  468497 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:35.294806  468497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1013 22:02:35.294830  468497 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-080337 && echo "no-preload-080337" | sudo tee /etc/hostname
	I1013 22:02:35.447851  468497 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-080337
	
	I1013 22:02:35.447943  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:35.468905  468497 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:35.469246  468497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1013 22:02:35.469285  468497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-080337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-080337/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-080337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:02:35.613075  468497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:02:35.613108  468497 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:02:35.613126  468497 ubuntu.go:190] setting up certificates
	I1013 22:02:35.613136  468497 provision.go:84] configureAuth start
	I1013 22:02:35.613189  468497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:02:35.631293  468497 provision.go:143] copyHostCerts
	I1013 22:02:35.631347  468497 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:02:35.631364  468497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:02:35.631431  468497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:02:35.631554  468497 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:02:35.631565  468497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:02:35.631604  468497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:02:35.631750  468497 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:02:35.631769  468497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:02:35.631808  468497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:02:35.631903  468497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.no-preload-080337 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-080337]
	I1013 22:02:35.820584  468497 provision.go:177] copyRemoteCerts
	I1013 22:02:35.820647  468497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:02:35.820684  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:35.839016  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:35.938036  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:02:35.956605  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:02:35.974619  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:02:35.993177  468497 provision.go:87] duration metric: took 380.027557ms to configureAuth
	I1013 22:02:35.993210  468497 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:02:35.993377  468497 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:35.993471  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:36.012275  468497 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:36.012496  468497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1013 22:02:36.012514  468497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:02:31.540597  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:31.540640  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:31.563542  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:31.563578  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:31.654504  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:31.654555  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:31.686515  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:31.686544  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:31.726071  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:31.726109  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:31.853611  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:31.853662  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:31.936619  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:31.936652  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:31.936669  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:31.981261  410447 logs.go:123] Gathering logs for kube-controller-manager [6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d] ...
	I1013 22:02:31.981310  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6bb08d6a13040e205988dc98d566c74f87d564ee6fc94f5c64022735afe6609d"
	I1013 22:02:34.520059  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:34.520500  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:34.520562  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:34.520620  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:34.552419  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:34.552442  410447 cri.go:89] found id: ""
	I1013 22:02:34.552451  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:34.552498  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:34.556864  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:34.556930  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:34.585919  410447 cri.go:89] found id: ""
	I1013 22:02:34.585950  410447 logs.go:282] 0 containers: []
	W1013 22:02:34.585962  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:34.585969  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:34.586053  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:34.618473  410447 cri.go:89] found id: ""
	I1013 22:02:34.618500  410447 logs.go:282] 0 containers: []
	W1013 22:02:34.618508  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:34.618513  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:34.618561  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:34.649147  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:34.649174  410447 cri.go:89] found id: ""
	I1013 22:02:34.649186  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:34.649246  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:34.653772  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:34.653854  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:34.685324  410447 cri.go:89] found id: ""
	I1013 22:02:34.685358  410447 logs.go:282] 0 containers: []
	W1013 22:02:34.685369  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:34.685378  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:34.685439  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:34.718761  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:34.718793  410447 cri.go:89] found id: ""
	I1013 22:02:34.718805  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:34.718877  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:34.723698  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:34.723770  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:34.759630  410447 cri.go:89] found id: ""
	I1013 22:02:34.759659  410447 logs.go:282] 0 containers: []
	W1013 22:02:34.759670  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:34.759677  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:34.759749  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:34.794677  410447 cri.go:89] found id: ""
	I1013 22:02:34.794706  410447 logs.go:282] 0 containers: []
	W1013 22:02:34.794743  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:34.794759  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:34.794778  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:34.830418  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:34.830452  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:34.909408  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:34.909454  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:34.950704  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:34.950750  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:35.077126  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:35.077165  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:35.099151  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:35.099192  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:35.174469  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:35.174498  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:35.174514  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:35.216269  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:35.216305  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	W1013 22:02:32.522637  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	W1013 22:02:34.523377  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	W1013 22:02:37.022384  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
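
The pod_ready.go warnings from process 464437 are a poll on the coredns pod's Ready condition. The check itself reduces to scanning the pod's status conditions; a minimal sketch using the k8s.io/api types (the helper name and the standalone demo pod are assumptions, and the real loop runs against a live cluster rather than a literal):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True — the
    // check behind the `pod ... is not "Ready"` warnings above.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{
    		Status: corev1.PodStatus{
    			Conditions: []corev1.PodCondition{
    				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    			},
    		},
    	}
    	fmt.Println("coredns ready:", isPodReady(pod)) // false, as in the log
    }
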
	I1013 22:02:37.120278  468497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:02:37.120305  468497 machine.go:96] duration metric: took 5.027607029s to provisionDockerMachine
	I1013 22:02:37.120337  468497 start.go:293] postStartSetup for "no-preload-080337" (driver="docker")
	I1013 22:02:37.120351  468497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:02:37.120425  468497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:02:37.120483  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:37.141151  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:37.241132  468497 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:02:37.245022  468497 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:02:37.245056  468497 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:02:37.245067  468497 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:02:37.245118  468497 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:02:37.245187  468497 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:02:37.245278  468497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:02:37.253432  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:02:37.271729  468497 start.go:296] duration metric: took 151.372038ms for postStartSetup
	I1013 22:02:37.271821  468497 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:02:37.271864  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:37.290561  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:37.386482  468497 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:02:37.391286  468497 fix.go:56] duration metric: took 5.683766406s for fixHost
	I1013 22:02:37.391312  468497 start.go:83] releasing machines lock for "no-preload-080337", held for 5.683821416s
	I1013 22:02:37.391392  468497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-080337
	I1013 22:02:37.409126  468497 ssh_runner.go:195] Run: cat /version.json
	I1013 22:02:37.409169  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:37.409253  468497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:02:37.409325  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:37.429184  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:37.429389  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:37.524565  468497 ssh_runner.go:195] Run: systemctl --version
	I1013 22:02:37.582041  468497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:02:37.619890  468497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:02:37.624908  468497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:02:37.624965  468497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:02:37.633166  468497 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:02:37.633192  468497 start.go:495] detecting cgroup driver to use...
	I1013 22:02:37.633223  468497 detect.go:190] detected "systemd" cgroup driver on host os
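
detect.go reports a "systemd" cgroup driver on the host, and the CRI-O configuration further down is aligned to it. One common heuristic for that detection is checking for cgroup v2's unified hierarchy; the sketch below uses that heuristic as an assumption and is not necessarily how minikube decides:

    package main

    import (
    	"fmt"
    	"os"
    )

    // detectCgroupDriver guesses the driver: cgroup v2 exposes
    // /sys/fs/cgroup/cgroup.controllers, and systemd-managed hosts on v2
    // generally want the "systemd" driver. Illustrative only.
    func detectCgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Println("detected cgroup driver:", detectCgroupDriver())
    }
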
	I1013 22:02:37.633271  468497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:02:37.651748  468497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:02:37.664908  468497 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:02:37.664967  468497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:02:37.680417  468497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:02:37.693805  468497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:02:37.776202  468497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:02:37.872640  468497 docker.go:234] disabling docker service ...
	I1013 22:02:37.872709  468497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:02:37.888789  468497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:02:37.902485  468497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:02:37.998756  468497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:02:38.098903  468497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:02:38.112966  468497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:02:38.128217  468497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:02:38.128269  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.138233  468497 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:02:38.138291  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.149177  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.159153  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.169925  468497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:02:38.178761  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.188570  468497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.197599  468497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:38.206825  468497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:02:38.214598  468497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:02:38.222934  468497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:38.308216  468497 ssh_runner.go:195] Run: sudo systemctl restart crio
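Note: taken together, the sed edits above (22:02:38.128 through 22:02:38.197) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the logged substitutions, not a capture of the actual file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

crio is then restarted via systemctl so the new pause image, systemd cgroup driver, and unprivileged-port sysctl take effect.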
	I1013 22:02:38.430456  468497 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:02:38.430528  468497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:02:38.434599  468497 start.go:563] Will wait 60s for crictl version
	I1013 22:02:38.434655  468497 ssh_runner.go:195] Run: which crictl
	I1013 22:02:38.438560  468497 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:02:38.464619  468497 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:02:38.464699  468497 ssh_runner.go:195] Run: crio --version
	I1013 22:02:38.493464  468497 ssh_runner.go:195] Run: crio --version
	I1013 22:02:38.526603  468497 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:02:38.527886  468497 cli_runner.go:164] Run: docker network inspect no-preload-080337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:02:38.546159  468497 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1013 22:02:38.550397  468497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
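Note: the bash one-liner above is minikube's idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts with sudo. The same pattern is reused below for control-plane.minikube.internal.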
	I1013 22:02:38.561290  468497 kubeadm.go:883] updating cluster {Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:02:38.561402  468497 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:02:38.561446  468497 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:02:38.594978  468497 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:02:38.595027  468497 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:02:38.595038  468497 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:02:38.595152  468497 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-080337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
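Note: the empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service unit before substituting minikube's own invocation; systemd only allows multiple ExecStart values for Type=oneshot services, so the reset is required for the override to be accepted.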
	I1013 22:02:38.595237  468497 ssh_runner.go:195] Run: crio config
	I1013 22:02:38.645613  468497 cni.go:84] Creating CNI manager for ""
	I1013 22:02:38.645638  468497 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:38.645656  468497 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:02:38.645685  468497 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-080337 NodeName:no-preload-080337 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:02:38.645885  468497 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-080337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
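Note: the four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new in the 2213-byte scp below. On a restart, minikube diffs this file against the existing /var/tmp/minikube/kubeadm.yaml rather than re-running kubeadm init, which is why the log later reports that the running cluster does not require reconfiguration.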
	
	I1013 22:02:38.645965  468497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:02:38.655593  468497 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:02:38.655667  468497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:02:38.663956  468497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:02:38.677255  468497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:02:38.690164  468497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1013 22:02:38.703395  468497 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:02:38.707497  468497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:02:38.717846  468497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:38.796608  468497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:02:38.823684  468497 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337 for IP: 192.168.94.2
	I1013 22:02:38.823706  468497 certs.go:195] generating shared ca certs ...
	I1013 22:02:38.823722  468497 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:38.823887  468497 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:02:38.823961  468497 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:02:38.824018  468497 certs.go:257] generating profile certs ...
	I1013 22:02:38.824143  468497 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.key
	I1013 22:02:38.824224  468497 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key.7644baed
	I1013 22:02:38.824272  468497 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key
	I1013 22:02:38.824413  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:02:38.824466  468497 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:02:38.824480  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:02:38.824511  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:02:38.824545  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:02:38.824576  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:02:38.824628  468497 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:02:38.825335  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:02:38.844928  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:02:38.863870  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:02:38.883146  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:02:38.907819  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:02:38.927138  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:02:38.945747  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:02:38.963701  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:02:38.982160  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:02:38.999878  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:02:39.018840  468497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:02:39.036983  468497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:02:39.050544  468497 ssh_runner.go:195] Run: openssl version
	I1013 22:02:39.057278  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:02:39.065789  468497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:02:39.069636  468497 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:02:39.069695  468497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:02:39.106167  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:02:39.115089  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:02:39.123426  468497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:39.127445  468497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:39.127496  468497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:39.162324  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:02:39.170974  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:02:39.179879  468497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:02:39.183767  468497 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:02:39.183821  468497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:02:39.219298  468497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
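Note: the openssl x509 -hash -noout runs above compute each certificate's subject-name hash, and the ln -fs commands create the matching <hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) in /etc/ssl/certs. That is the layout OpenSSL uses to look up trusted CAs by subject hash, so tooling on the node will trust the minikube CA and the test certificates.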
	I1013 22:02:39.228352  468497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:02:39.232367  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:02:39.267344  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:02:39.302745  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:02:39.348393  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:02:39.393456  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:02:39.443337  468497 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
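Note: the six openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); openssl exits non-zero if the certificate would expire within that window, which would trigger regeneration. An equivalent check in Go, using one of the logged paths (a sketch, not minikube's actual code):

    // Sketch of `openssl x509 -checkend 86400`: parse a PEM certificate and
    // report whether it expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h") // openssl exits 1 here
        } else {
            fmt.Println("certificate is good for at least 24h") // openssl exits 0
        }
    }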
	I1013 22:02:39.496303  468497 kubeadm.go:400] StartCluster: {Name:no-preload-080337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-080337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:39.496456  468497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:02:39.496570  468497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:02:39.531183  468497 cri.go:89] found id: "148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d"
	I1013 22:02:39.531206  468497 cri.go:89] found id: "db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f"
	I1013 22:02:39.531211  468497 cri.go:89] found id: "3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5"
	I1013 22:02:39.531278  468497 cri.go:89] found id: "09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424"
	I1013 22:02:39.531283  468497 cri.go:89] found id: ""
	I1013 22:02:39.531337  468497 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:02:39.544321  468497 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:02:39Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:02:39.544396  468497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:02:39.552777  468497 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:02:39.552798  468497 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:02:39.552848  468497 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:02:39.560648  468497 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:02:39.561660  468497 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-080337" does not appear in /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:02:39.562313  468497 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-226873/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-080337" cluster setting kubeconfig missing "no-preload-080337" context setting]
	I1013 22:02:39.562931  468497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:39.564521  468497 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:02:39.572602  468497 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1013 22:02:39.572634  468497 kubeadm.go:601] duration metric: took 19.828886ms to restartPrimaryControlPlane
	I1013 22:02:39.572645  468497 kubeadm.go:402] duration metric: took 76.357243ms to StartCluster
	I1013 22:02:39.572668  468497 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:39.572744  468497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:02:39.574598  468497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:39.574860  468497 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:02:39.574926  468497 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:02:39.575051  468497 addons.go:69] Setting storage-provisioner=true in profile "no-preload-080337"
	I1013 22:02:39.575066  468497 addons.go:69] Setting dashboard=true in profile "no-preload-080337"
	I1013 22:02:39.575081  468497 addons.go:69] Setting default-storageclass=true in profile "no-preload-080337"
	I1013 22:02:39.575089  468497 addons.go:238] Setting addon dashboard=true in "no-preload-080337"
	W1013 22:02:39.575098  468497 addons.go:247] addon dashboard should already be in state true
	I1013 22:02:39.575101  468497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-080337"
	I1013 22:02:39.575138  468497 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:39.575142  468497 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:02:39.575072  468497 addons.go:238] Setting addon storage-provisioner=true in "no-preload-080337"
	W1013 22:02:39.575337  468497 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:02:39.575370  468497 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:02:39.575499  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:39.575739  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:39.575816  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:39.577851  468497 out.go:179] * Verifying Kubernetes components...
	I1013 22:02:39.579123  468497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:39.601568  468497 addons.go:238] Setting addon default-storageclass=true in "no-preload-080337"
	W1013 22:02:39.601596  468497 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:02:39.601726  468497 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:02:39.602261  468497 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:02:39.610133  468497 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:02:39.611167  468497 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:02:39.613006  468497 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:02:39.613073  468497 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:02:39.613091  468497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:02:39.613155  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:39.614464  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:02:39.614509  468497 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:02:39.614619  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:39.635067  468497 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:02:39.635144  468497 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:02:39.635215  468497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:02:39.646805  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:39.648273  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:39.663266  468497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:02:39.730347  468497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:02:39.745650  468497 node_ready.go:35] waiting up to 6m0s for node "no-preload-080337" to be "Ready" ...
	I1013 22:02:39.766936  468497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:02:39.768320  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:02:39.768344  468497 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:02:39.772331  468497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:02:39.784981  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:02:39.785034  468497 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:02:39.802933  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:02:39.802962  468497 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:02:39.819648  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:02:39.819675  468497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:02:39.837090  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:02:39.837121  468497 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:02:39.851391  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:02:39.851414  468497 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:02:39.864843  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:02:39.864868  468497 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:02:39.879437  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:02:39.879464  468497 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:02:39.893463  468497 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:02:39.893489  468497 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:02:39.908404  468497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
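Note: the storage-provisioner, storageclass, and dashboard applies above are kicked off concurrently over separate ssh sessions using the cluster's own kubectl binary and in-VM kubeconfig; their completions appear below at 22:02:41 (the ssh_runner.go:235 "Completed" lines), each taking roughly two seconds.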
	I1013 22:02:41.350817  468497 node_ready.go:49] node "no-preload-080337" is "Ready"
	I1013 22:02:41.350858  468497 node_ready.go:38] duration metric: took 1.605171716s for node "no-preload-080337" to be "Ready" ...
	I1013 22:02:41.350876  468497 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:02:41.350936  468497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:02:37.780054  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:37.780396  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:37.780441  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:37.780485  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:37.809107  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:37.809135  410447 cri.go:89] found id: ""
	I1013 22:02:37.809146  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:37.809210  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:37.815735  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:37.815903  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:37.848735  410447 cri.go:89] found id: ""
	I1013 22:02:37.848760  410447 logs.go:282] 0 containers: []
	W1013 22:02:37.848767  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:37.848773  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:37.848819  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:37.878010  410447 cri.go:89] found id: ""
	I1013 22:02:37.878036  410447 logs.go:282] 0 containers: []
	W1013 22:02:37.878043  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:37.878050  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:37.878097  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:37.906671  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:37.906692  410447 cri.go:89] found id: ""
	I1013 22:02:37.906701  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:37.906753  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:37.910882  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:37.910962  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:37.942854  410447 cri.go:89] found id: ""
	I1013 22:02:37.942886  410447 logs.go:282] 0 containers: []
	W1013 22:02:37.942898  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:37.942906  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:37.942970  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:37.972955  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:37.972980  410447 cri.go:89] found id: ""
	I1013 22:02:37.973026  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:37.973090  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:37.977762  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:37.977842  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:38.008583  410447 cri.go:89] found id: ""
	I1013 22:02:38.008609  410447 logs.go:282] 0 containers: []
	W1013 22:02:38.008620  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:38.008628  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:38.008690  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:38.045953  410447 cri.go:89] found id: ""
	I1013 22:02:38.046019  410447 logs.go:282] 0 containers: []
	W1013 22:02:38.046032  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:38.046044  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:38.046060  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:38.115006  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:38.115035  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:38.149344  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:38.149377  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:38.250613  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:38.250661  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:38.269067  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:38.269095  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:38.328918  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:38.328942  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:38.328958  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:38.365847  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:38.365889  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:38.428209  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:38.428248  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:40.958121  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:40.958754  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:40.958825  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:40.958888  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:40.988760  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:40.988789  410447 cri.go:89] found id: ""
	I1013 22:02:40.988801  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:40.988864  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:40.993162  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:40.993230  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:41.030263  410447 cri.go:89] found id: ""
	I1013 22:02:41.030294  410447 logs.go:282] 0 containers: []
	W1013 22:02:41.030304  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:41.030311  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:41.030365  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:41.058295  410447 cri.go:89] found id: ""
	I1013 22:02:41.058326  410447 logs.go:282] 0 containers: []
	W1013 22:02:41.058338  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:41.058346  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:41.058404  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:41.085633  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:41.085653  410447 cri.go:89] found id: ""
	I1013 22:02:41.085666  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:41.085715  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:41.089775  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:41.089847  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:41.118433  410447 cri.go:89] found id: ""
	I1013 22:02:41.118458  410447 logs.go:282] 0 containers: []
	W1013 22:02:41.118468  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:41.118474  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:41.118530  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:41.156661  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:41.156684  410447 cri.go:89] found id: ""
	I1013 22:02:41.156692  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:41.156746  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:41.160981  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:41.161073  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:41.189021  410447 cri.go:89] found id: ""
	I1013 22:02:41.189048  410447 logs.go:282] 0 containers: []
	W1013 22:02:41.189056  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:41.189061  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:41.189130  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:41.241321  410447 cri.go:89] found id: ""
	I1013 22:02:41.241355  410447 logs.go:282] 0 containers: []
	W1013 22:02:41.241366  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:41.241378  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:41.241395  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:41.269672  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:41.269699  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:41.371333  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:41.371374  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:41.418900  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:41.418941  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:41.865092  468497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.098110889s)
	I1013 22:02:41.865163  468497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.092799484s)
	I1013 22:02:41.865258  468497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.95680497s)
	I1013 22:02:41.865296  468497 api_server.go:72] duration metric: took 2.290404794s to wait for apiserver process to appear ...
	I1013 22:02:41.865319  468497 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:02:41.865340  468497 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:02:41.867297  468497 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-080337 addons enable metrics-server
	
	I1013 22:02:41.870168  468497 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:02:41.870191  468497 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:02:41.872961  468497 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1013 22:02:39.523915  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	W1013 22:02:41.524126  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	I1013 22:02:41.874122  468497 addons.go:514] duration metric: took 2.299211194s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:02:42.366150  468497 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:02:42.371195  468497 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:02:42.371230  468497 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:02:42.865472  468497 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:02:42.869898  468497 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:02:42.870949  468497 api_server.go:141] control plane version: v1.34.1
	I1013 22:02:42.870974  468497 api_server.go:131] duration metric: took 1.005648302s to wait for apiserver health ...
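Note: the wait above polls /healthz roughly every 500ms (compare the timestamps at 22:02:41.865, 22:02:42.366, and 22:02:42.865), treating the 500 responses, whose failing poststarthooks are dumped verbatim, as not-yet-healthy until a bare 200 "ok" comes back. A minimal sketch of such a loop; the InsecureSkipVerify client is an assumption for brevity (minikube itself verifies against the cluster CA):

    // Sketch of the healthz polling seen above: hit /healthz until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                status := resp.StatusCode
                resp.Body.Close()
                if status == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", status)
            } else {
                fmt.Println("healthz unreachable, retrying:", err)
            }
            // the real loop also enforces an overall deadline; omitted here
            time.Sleep(500 * time.Millisecond)
        }
    }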
	I1013 22:02:42.871019  468497 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:02:42.874679  468497 system_pods.go:59] 8 kube-system pods found
	I1013 22:02:42.874736  468497 system_pods.go:61] "coredns-66bc5c9577-n6t7s" [65d002b2-28ab-45b2-aa56-7d828173f096] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:02:42.874745  468497 system_pods.go:61] "etcd-no-preload-080337" [a4a602d7-754c-4cf8-a0d7-a882475ae6f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:02:42.874754  468497 system_pods.go:61] "kindnet-74766" [055d27fc-daa6-45c6-b0a7-492a6eb17617] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:02:42.874763  468497 system_pods.go:61] "kube-apiserver-no-preload-080337" [6bc08475-e16f-49d4-8bef-34869194c39b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:02:42.874771  468497 system_pods.go:61] "kube-controller-manager-no-preload-080337" [0a748245-a74f-4720-9b90-292b647cda1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:02:42.874776  468497 system_pods.go:61] "kube-proxy-2scrx" [5f331f5d-7309-4de9-a64c-41a65ee37e7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:02:42.874784  468497 system_pods.go:61] "kube-scheduler-no-preload-080337" [35828d1c-f154-4fa0-9d3f-aa12251fa2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:02:42.874792  468497 system_pods.go:61] "storage-provisioner" [f65bcb32-fd36-4634-b3f1-b93eb14848bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:02:42.874799  468497 system_pods.go:74] duration metric: took 3.768814ms to wait for pod list to return data ...
	I1013 22:02:42.874811  468497 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:02:42.877256  468497 default_sa.go:45] found service account: "default"
	I1013 22:02:42.877277  468497 default_sa.go:55] duration metric: took 2.461091ms for default service account to be created ...
	I1013 22:02:42.877286  468497 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:02:42.879949  468497 system_pods.go:86] 8 kube-system pods found
	I1013 22:02:42.879973  468497 system_pods.go:89] "coredns-66bc5c9577-n6t7s" [65d002b2-28ab-45b2-aa56-7d828173f096] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:02:42.880005  468497 system_pods.go:89] "etcd-no-preload-080337" [a4a602d7-754c-4cf8-a0d7-a882475ae6f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:02:42.880017  468497 system_pods.go:89] "kindnet-74766" [055d27fc-daa6-45c6-b0a7-492a6eb17617] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:02:42.880029  468497 system_pods.go:89] "kube-apiserver-no-preload-080337" [6bc08475-e16f-49d4-8bef-34869194c39b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:02:42.880037  468497 system_pods.go:89] "kube-controller-manager-no-preload-080337" [0a748245-a74f-4720-9b90-292b647cda1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:02:42.880045  468497 system_pods.go:89] "kube-proxy-2scrx" [5f331f5d-7309-4de9-a64c-41a65ee37e7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:02:42.880054  468497 system_pods.go:89] "kube-scheduler-no-preload-080337" [35828d1c-f154-4fa0-9d3f-aa12251fa2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:02:42.880062  468497 system_pods.go:89] "storage-provisioner" [f65bcb32-fd36-4634-b3f1-b93eb14848bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:02:42.880070  468497 system_pods.go:126] duration metric: took 2.778707ms to wait for k8s-apps to be running ...
	I1013 22:02:42.880081  468497 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:02:42.880131  468497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:02:42.893782  468497 system_svc.go:56] duration metric: took 13.687554ms WaitForService to wait for kubelet
	I1013 22:02:42.893810  468497 kubeadm.go:586] duration metric: took 3.318922405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:02:42.893833  468497 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:02:42.896905  468497 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:02:42.896931  468497 node_conditions.go:123] node cpu capacity is 8
	I1013 22:02:42.896944  468497 node_conditions.go:105] duration metric: took 3.105692ms to run NodePressure ...
	I1013 22:02:42.896958  468497 start.go:241] waiting for startup goroutines ...
	I1013 22:02:42.896967  468497 start.go:246] waiting for cluster config update ...
	I1013 22:02:42.896984  468497 start.go:255] writing updated cluster config ...
	I1013 22:02:42.897298  468497 ssh_runner.go:195] Run: rm -f paused
	I1013 22:02:42.901241  468497 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:02:42.904500  468497 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n6t7s" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:02:44.909353  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
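The pod_ready.go lines just above (and the retries interleaved through the rest of this log) poll each kube-system pod for its Ready condition. A sketch of the same check using client-go; this is an illustration rather than minikube's actual helper, and the kubeconfig path and pod name are placeholders lifted from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube drives this via its own profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-n6t7s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, isPodReady(pod))
}
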
	I1013 22:02:41.532460  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:41.532557  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:41.551415  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:41.551449  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:41.619791  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:41.619814  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:41.619828  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:41.659488  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:41.659528  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:44.231879  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:44.232419  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:44.232487  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:44.232550  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:44.260088  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:44.260113  410447 cri.go:89] found id: ""
	I1013 22:02:44.260125  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:44.260186  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:44.264375  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:44.264449  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:44.292492  410447 cri.go:89] found id: ""
	I1013 22:02:44.292519  410447 logs.go:282] 0 containers: []
	W1013 22:02:44.292530  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:44.292538  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:44.292595  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:44.320003  410447 cri.go:89] found id: ""
	I1013 22:02:44.320039  410447 logs.go:282] 0 containers: []
	W1013 22:02:44.320049  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:44.320056  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:44.320118  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:44.348214  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:44.348235  410447 cri.go:89] found id: ""
	I1013 22:02:44.348243  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:44.348291  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:44.352430  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:44.352489  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:44.380611  410447 cri.go:89] found id: ""
	I1013 22:02:44.380642  410447 logs.go:282] 0 containers: []
	W1013 22:02:44.380652  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:44.380660  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:44.380726  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:44.409423  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:44.409449  410447 cri.go:89] found id: ""
	I1013 22:02:44.409460  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:44.409525  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:44.414047  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:44.414113  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:44.442090  410447 cri.go:89] found id: ""
	I1013 22:02:44.442121  410447 logs.go:282] 0 containers: []
	W1013 22:02:44.442133  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:44.442142  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:44.442213  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:44.471017  410447 cri.go:89] found id: ""
	I1013 22:02:44.471048  410447 logs.go:282] 0 containers: []
	W1013 22:02:44.471058  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:44.471093  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:44.471107  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:44.501918  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:44.501947  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:44.613768  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:44.613811  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:44.635799  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:44.635846  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:44.710984  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:44.711053  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:44.711069  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:44.746750  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:44.746783  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:44.806252  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:44.806306  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:44.834583  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:44.834611  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1013 22:02:44.022522  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	W1013 22:02:46.023693  464437 pod_ready.go:104] pod "coredns-5dd5756b68-wx29h" is not "Ready", error: <nil>
	I1013 22:02:48.023902  464437 pod_ready.go:94] pod "coredns-5dd5756b68-wx29h" is "Ready"
	I1013 22:02:48.023933  464437 pod_ready.go:86] duration metric: took 30.50694283s for pod "coredns-5dd5756b68-wx29h" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.027171  464437 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.032059  464437 pod_ready.go:94] pod "etcd-old-k8s-version-534822" is "Ready"
	I1013 22:02:48.032090  464437 pod_ready.go:86] duration metric: took 4.894293ms for pod "etcd-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.035435  464437 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.039660  464437 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-534822" is "Ready"
	I1013 22:02:48.039683  464437 pod_ready.go:86] duration metric: took 4.225832ms for pod "kube-apiserver-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.042425  464437 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.221186  464437 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-534822" is "Ready"
	I1013 22:02:48.221220  464437 pod_ready.go:86] duration metric: took 178.773173ms for pod "kube-controller-manager-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.422201  464437 pod_ready.go:83] waiting for pod "kube-proxy-dvt68" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:48.821935  464437 pod_ready.go:94] pod "kube-proxy-dvt68" is "Ready"
	I1013 22:02:48.821967  464437 pod_ready.go:86] duration metric: took 399.742437ms for pod "kube-proxy-dvt68" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:49.022341  464437 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:49.421231  464437 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-534822" is "Ready"
	I1013 22:02:49.421266  464437 pod_ready.go:86] duration metric: took 398.895504ms for pod "kube-scheduler-old-k8s-version-534822" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:02:49.421282  464437 pod_ready.go:40] duration metric: took 31.908818111s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:02:49.479762  464437 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1013 22:02:49.481728  464437 out.go:203] 
	W1013 22:02:49.483316  464437 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 22:02:49.485281  464437 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:02:49.486544  464437 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-534822" cluster and "default" namespace by default
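Just above, minikube flags a client/server skew of six minor versions (kubectl 1.34.1 against cluster 1.28.0). Upstream kubectl is only supported within one minor version of the apiserver, hence the warning and the 'minikube kubectl' suggestion. A quick sketch of that skew computation, with the version strings hard-coded from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorVersion extracts the minor component from a "major.minor.patch" string.
func minorVersion(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, server := "1.34.1", "1.28.0"
	skew := minorVersion(client) - minorVersion(server)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 6, matching the log
	if skew > 1 {
		fmt.Println("warning: kubectl may have incompatibilities with this cluster")
	}
}
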
	W1013 22:02:46.909961  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:02:48.910478  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:02:50.910649  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	I1013 22:02:47.392196  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:47.392780  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:47.392854  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:47.392913  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:47.425187  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:47.425214  410447 cri.go:89] found id: ""
	I1013 22:02:47.425223  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:47.425281  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:47.430145  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:47.430213  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:47.461185  410447 cri.go:89] found id: ""
	I1013 22:02:47.461224  410447 logs.go:282] 0 containers: []
	W1013 22:02:47.461235  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:47.461243  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:47.461294  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:47.491781  410447 cri.go:89] found id: ""
	I1013 22:02:47.491809  410447 logs.go:282] 0 containers: []
	W1013 22:02:47.491821  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:47.491829  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:47.491888  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:47.526155  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:47.526182  410447 cri.go:89] found id: ""
	I1013 22:02:47.526193  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:47.526261  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:47.531216  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:47.531288  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:47.561224  410447 cri.go:89] found id: ""
	I1013 22:02:47.561255  410447 logs.go:282] 0 containers: []
	W1013 22:02:47.561268  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:47.561276  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:47.561339  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:47.591962  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:47.592020  410447 cri.go:89] found id: ""
	I1013 22:02:47.592033  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:47.592197  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:47.597432  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:47.597511  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:47.627871  410447 cri.go:89] found id: ""
	I1013 22:02:47.627902  410447 logs.go:282] 0 containers: []
	W1013 22:02:47.627914  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:47.627923  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:47.627985  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:47.660325  410447 cri.go:89] found id: ""
	I1013 22:02:47.660358  410447 logs.go:282] 0 containers: []
	W1013 22:02:47.660369  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:47.660383  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:47.660398  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:47.758089  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:47.758130  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:47.774259  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:47.774289  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:47.830519  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:47.830542  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:47.830557  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:47.862953  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:47.862984  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:47.924740  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:47.924792  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:47.957849  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:47.957888  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:48.035284  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:48.035323  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:50.570211  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:50.570748  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:50.570820  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:50.570884  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:50.611411  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:50.611437  410447 cri.go:89] found id: ""
	I1013 22:02:50.611448  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:50.611516  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:50.616880  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:50.616958  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:50.655296  410447 cri.go:89] found id: ""
	I1013 22:02:50.655329  410447 logs.go:282] 0 containers: []
	W1013 22:02:50.655340  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:50.655347  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:50.655411  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:50.692420  410447 cri.go:89] found id: ""
	I1013 22:02:50.692452  410447 logs.go:282] 0 containers: []
	W1013 22:02:50.692465  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:50.692473  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:50.692538  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:50.730578  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:50.730668  410447 cri.go:89] found id: ""
	I1013 22:02:50.730685  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:50.730746  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:50.736202  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:50.736283  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:50.774415  410447 cri.go:89] found id: ""
	I1013 22:02:50.774448  410447 logs.go:282] 0 containers: []
	W1013 22:02:50.774459  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:50.774469  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:50.774544  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:50.811863  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:50.811891  410447 cri.go:89] found id: ""
	I1013 22:02:50.811902  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:50.811973  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:50.817464  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:50.817545  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:50.854765  410447 cri.go:89] found id: ""
	I1013 22:02:50.854790  410447 logs.go:282] 0 containers: []
	W1013 22:02:50.854797  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:50.854809  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:50.854859  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:50.893815  410447 cri.go:89] found id: ""
	I1013 22:02:50.893843  410447 logs.go:282] 0 containers: []
	W1013 22:02:50.893853  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:50.893865  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:50.893881  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:51.010315  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:51.010354  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:51.027042  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:51.027081  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:51.089074  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:51.089101  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:51.089123  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:51.129206  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:51.129244  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:51.193654  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:51.193690  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:51.223197  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:51.223235  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:51.289435  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:51.289484  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1013 22:02:53.409900  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:02:55.410244  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	I1013 22:02:53.826089  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:53.826483  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:53.826530  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:53.826579  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:53.854628  410447 cri.go:89] found id: "7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:53.854653  410447 cri.go:89] found id: ""
	I1013 22:02:53.854661  410447 logs.go:282] 1 containers: [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77]
	I1013 22:02:53.854710  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:53.858781  410447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:53.858840  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:53.889567  410447 cri.go:89] found id: ""
	I1013 22:02:53.889596  410447 logs.go:282] 0 containers: []
	W1013 22:02:53.889607  410447 logs.go:284] No container was found matching "etcd"
	I1013 22:02:53.889615  410447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:53.889685  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:53.920253  410447 cri.go:89] found id: ""
	I1013 22:02:53.920277  410447 logs.go:282] 0 containers: []
	W1013 22:02:53.920288  410447 logs.go:284] No container was found matching "coredns"
	I1013 22:02:53.920295  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:53.920350  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:53.949080  410447 cri.go:89] found id: "d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:53.949108  410447 cri.go:89] found id: ""
	I1013 22:02:53.949119  410447 logs.go:282] 1 containers: [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110]
	I1013 22:02:53.949173  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:53.953407  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:53.953475  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:53.984141  410447 cri.go:89] found id: ""
	I1013 22:02:53.984170  410447 logs.go:282] 0 containers: []
	W1013 22:02:53.984178  410447 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:53.984184  410447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:53.984239  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:54.012982  410447 cri.go:89] found id: "f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:54.013022  410447 cri.go:89] found id: ""
	I1013 22:02:54.013033  410447 logs.go:282] 1 containers: [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e]
	I1013 22:02:54.013095  410447 ssh_runner.go:195] Run: which crictl
	I1013 22:02:54.018349  410447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:54.018418  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:54.045371  410447 cri.go:89] found id: ""
	I1013 22:02:54.045398  410447 logs.go:282] 0 containers: []
	W1013 22:02:54.045409  410447 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:54.045417  410447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 22:02:54.045476  410447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 22:02:54.072336  410447 cri.go:89] found id: ""
	I1013 22:02:54.072363  410447 logs.go:282] 0 containers: []
	W1013 22:02:54.072375  410447 logs.go:284] No container was found matching "storage-provisioner"
	I1013 22:02:54.072408  410447 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:54.072429  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:54.170941  410447 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:54.170978  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:02:54.186438  410447 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:54.186466  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:54.243600  410447 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:54.243621  410447 logs.go:123] Gathering logs for kube-apiserver [7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77] ...
	I1013 22:02:54.243635  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7c043a9cc58769edd4f73332ec4ad2ab1ede59a8e9a0dbb54ce84bce3b265d77"
	I1013 22:02:54.278431  410447 logs.go:123] Gathering logs for kube-scheduler [d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110] ...
	I1013 22:02:54.278470  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d9816597d5c42a73ba873184013e2f5bf8599d29cad04804c24258713a6c1110"
	I1013 22:02:54.335060  410447 logs.go:123] Gathering logs for kube-controller-manager [f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e] ...
	I1013 22:02:54.335095  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f79454099a1733816a4fe6edbc16102f58fa4c29e3f23bb4dec43bbd7666639e"
	I1013 22:02:54.363176  410447 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:54.363212  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:54.425075  410447 logs.go:123] Gathering logs for container status ...
	I1013 22:02:54.425113  410447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1013 22:02:57.410576  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:02:59.911539  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	I1013 22:02:56.957686  410447 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:56.958159  410447 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 22:02:56.958236  410447 kubeadm.go:601] duration metric: took 4m5.072860176s to restartPrimaryControlPlane
	W1013 22:02:56.958307  410447 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1013 22:02:56.958374  410447 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1013 22:02:57.537126  410447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:02:57.551286  410447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:02:57.560556  410447 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:02:57.560612  410447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:02:57.569624  410447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:02:57.569642  410447 kubeadm.go:157] found existing configuration files:
	
	I1013 22:02:57.569707  410447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:02:57.578023  410447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:02:57.578082  410447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:02:57.586286  410447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:02:57.594544  410447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:02:57.594612  410447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:02:57.602983  410447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:02:57.611067  410447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:02:57.611122  410447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:02:57.618802  410447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:02:57.626758  410447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:02:57.626810  410447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:02:57.634899  410447 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:02:57.695457  410447 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:02:57.754172  410447 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
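Before re-running 'kubeadm init', the lines above show minikube's stale-config sweep: for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the grep fails (here all four were already gone after 'kubeadm reset'). A local-filesystem sketch of that grep-then-remove pattern, assuming the same endpoint string; the real code runs these as shell commands over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Unreadable or wrong endpoint: treat as stale and mirror
			// "sudo rm -f" (a no-op if the file is already absent).
			os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
			continue
		}
		fmt.Printf("kept %s\n", path)
	}
}
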
	
	
	==> CRI-O <==
	Oct 13 22:02:36 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:36.12959146Z" level=info msg="Started container" PID=1734 containerID=5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper id=5bccb2ae-13c6-486a-a088-c6435c32e745 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cad5526c4d24689eda97e66fbf0bdb7054c249724461a791341dcaca24eb38c
	Oct 13 22:02:37 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:37.08537415Z" level=info msg="Removing container: 9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c" id=9a7a9274-5ee2-4360-aab6-b4ef11897f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:37 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:37.100054572Z" level=info msg="Removed container 9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=9a7a9274-5ee2-4360-aab6-b4ef11897f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.11642424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1810179e-6ed0-484a-9acf-d491d730cd4a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.117385664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d210ef85-3bdb-4c51-a044-44a0059714e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.118372655Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f4a497bf-787b-47b6-b9f2-10765703ba11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.118675721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.122986218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123217827Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/65d9ce38fe5f67ecc36d1529191bc70690b99903d16fe0b0d6744d477c04f873/merged/etc/passwd: no such file or directory"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123251952Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/65d9ce38fe5f67ecc36d1529191bc70690b99903d16fe0b0d6744d477c04f873/merged/etc/group: no such file or directory"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123570692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.146117142Z" level=info msg="Created container 37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc: kube-system/storage-provisioner/storage-provisioner" id=f4a497bf-787b-47b6-b9f2-10765703ba11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.146856058Z" level=info msg="Starting container: 37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc" id=26259238-6736-4b47-acc1-29f64ca2988f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.148695404Z" level=info msg="Started container" PID=1752 containerID=37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc description=kube-system/storage-provisioner/storage-provisioner id=26259238-6736-4b47-acc1-29f64ca2988f name=/runtime.v1.RuntimeService/StartContainer sandboxID=571943ba0d878876d8c48de5a3dee70063b7bc93c99564879691a6d486956f6c
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.001004573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bc46fcf3-a92a-4ed7-8c90-4f8eb90ce15f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.023723828Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bab9bb86-e987-493d-b007-bdb5a39ceec2 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.025033181Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=a423b561-465d-4b66-8021-6bad57e1235b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.025304434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.065536426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.066147984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.111973631Z" level=info msg="Created container 24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=a423b561-465d-4b66-8021-6bad57e1235b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.112729116Z" level=info msg="Starting container: 24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46" id=ee245e28-6adb-490d-86bf-33abb67fd2ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.115305853Z" level=info msg="Started container" PID=1785 containerID=24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper id=ee245e28-6adb-490d-86bf-33abb67fd2ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cad5526c4d24689eda97e66fbf0bdb7054c249724461a791341dcaca24eb38c
	Oct 13 22:02:52 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:52.13374535Z" level=info msg="Removing container: 5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc" id=b5e20a0a-2e12-44b5-a7eb-a5adb336502d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:52 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:52.144784478Z" level=info msg="Removed container 5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=b5e20a0a-2e12-44b5-a7eb-a5adb336502d name=/runtime.v1.RuntimeService/RemoveContainer
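This CRI-O excerpt captures a restart-with-backoff cycle: dashboard-metrics-scraper is created and started, exits, and the kubelet removes the previous attempt before launching the next one. A rough sketch that tallies those RemoveContainer events from the same journal source minikube gathers with "journalctl -u crio -n 400"; the regexp is keyed to the exact 'Removed container' message format shown above and is an assumption about that format staying stable:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	out, err := exec.Command("journalctl", "-u", "crio", "-n", "400", "--no-pager").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Matches: Removed container <id>: <namespace>/<pod>/<container>"
	re := regexp.MustCompile(`Removed container [0-9a-f]+: ([^"]+)"`)
	removals := map[string]int{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			removals[m[1]]++
		}
	}
	for name, n := range removals {
		fmt.Printf("%s: %d removal(s)\n", name, n) // repeated removals suggest a crash loop
	}
}
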
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	24a60a5877551       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   6cad5526c4d24       dashboard-metrics-scraper-5f989dc9cf-jfmmb       kubernetes-dashboard
	37228398f0d5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   571943ba0d878       storage-provisioner                              kube-system
	bd7cae91130f0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   606c28eefcf3d       kubernetes-dashboard-8694d4445c-85qc8            kubernetes-dashboard
	0babcc5fd2693       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   617a3124b10b7       busybox                                          default
	c111ec4bcc5b1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   84eb710fbef5f       coredns-5dd5756b68-wx29h                         kube-system
	c5ef5eaa11496       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   b96e4c5e23477       kube-proxy-dvt68                                 kube-system
	b624ac084d77a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   571943ba0d878       storage-provisioner                              kube-system
	e04dc6fa107a5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   72151db667390       kindnet-snc6w                                    kube-system
	8f0311ea43bb5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           50 seconds ago      Running             kube-controller-manager     0                   e94a3fe95484d       kube-controller-manager-old-k8s-version-534822   kube-system
	a8a18841fbef4       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           50 seconds ago      Running             kube-apiserver              0                   be7c7640ae258       kube-apiserver-old-k8s-version-534822            kube-system
	90f9e9007916b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           50 seconds ago      Running             kube-scheduler              0                   e938191aa40fb       kube-scheduler-old-k8s-version-534822            kube-system
	fef3d5bba9942       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           50 seconds ago      Running             etcd                        0                   d9a644625ff23       etcd-old-k8s-version-534822                      kube-system
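In the table above, STATE plus ATTEMPT is the crash-loop tell: dashboard-metrics-scraper sits Exited at attempt 2 while everything else runs at attempt 0 or 1. The same data is available programmatically; a sketch that shells out to crictl and flags exited containers with a non-zero attempt. The field names assume the CRI ListContainersResponse schema that 'crictl ps -o json' emits, which is worth verifying against your crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal slice of crictl's JSON output; unknown fields are ignored.
type containerList struct {
	Containers []struct {
		ID       string `json:"id"`
		State    string `json:"state"`
		Metadata struct {
			Name    string `json:"name"`
			Attempt int    `json:"attempt"`
		} `json:"metadata"`
	} `json:"containers"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list containerList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range list.Containers {
		if c.State == "CONTAINER_EXITED" && c.Metadata.Attempt > 0 {
			fmt.Printf("%s (attempt %d) looks crash-loopy\n", c.Metadata.Name, c.Metadata.Attempt)
		}
	}
}
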
	
	
	==> coredns [c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48964 - 41550 "HINFO IN 6307983208634643048.727038821031246850. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.080123795s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-534822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-534822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-534822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-534822
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:02:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-534822
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                6dba6f53-90ba-4da3-b3ef-d819199a3aeb
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-wx29h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-534822                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-snc6w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-534822             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-old-k8s-version-534822    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-dvt68                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-534822             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jfmmb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-85qc8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-534822 event: Registered Node old-k8s-version-534822 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-534822 status is now: NodeReady
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 51s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 51s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 51s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-534822 event: Registered Node old-k8s-version-534822 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2] <==
	{"level":"info","ts":"2025-10-13T22:02:14.593539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:02:14.593572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:02:14.595157Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.595225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.595237Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.59622Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-13T22:02:14.597034Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-13T22:02:14.597161Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-13T22:02:14.596969Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:02:14.597983Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:02:14.598082Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:02:14.881331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.884189Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-534822 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:02:14.884226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:02:14.884346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:02:14.884568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:02:14.886629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-13T22:02:14.886652Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T22:02:14.886485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:03:05 up  1:45,  0 user,  load average: 3.56, 3.32, 5.91
	Linux old-k8s-version-534822 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e] <==
	I1013 22:02:17.536966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:02:17.537231       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:02:17.537370       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:02:17.537389       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:02:17.537415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:02:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:02:17.739079       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:02:17.739114       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:02:17.739153       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:02:17.739278       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:02:18.316484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:02:18.316515       1 metrics.go:72] Registering metrics
	I1013 22:02:18.316616       1 controller.go:711] "Syncing nftables rules"
	I1013 22:02:27.832568       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:27.832651       1 main.go:301] handling current node
	I1013 22:02:37.832308       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:37.832345       1 main.go:301] handling current node
	I1013 22:02:47.832399       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:47.832434       1 main.go:301] handling current node
	I1013 22:02:57.832325       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:57.832372       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db] <==
	I1013 22:02:16.043869       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1013 22:02:16.044158       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1013 22:02:16.044195       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1013 22:02:16.044315       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 22:02:16.044352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:02:16.044424       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 22:02:16.044608       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1013 22:02:16.044819       1 aggregator.go:166] initial CRD sync complete...
	I1013 22:02:16.044855       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 22:02:16.044886       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:02:16.044910       1 cache.go:39] Caches are synced for autoregister controller
	E1013 22:02:16.053528       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:02:16.102017       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1013 22:02:16.865459       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:02:16.895563       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:02:16.912475       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:02:16.919412       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:02:16.925773       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:02:16.946700       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:02:16.959894       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.252.244"}
	I1013 22:02:16.971358       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.117.218"}
	I1013 22:02:28.270216       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:02:28.619346       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:02:28.619348       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:02:28.768450       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2] <==
	I1013 22:02:28.723554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.099µs"
	I1013 22:02:28.772027       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 22:02:28.773613       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 22:02:28.779052       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-85qc8"
	I1013 22:02:28.779080       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jfmmb"
	I1013 22:02:28.785785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.475139ms"
	I1013 22:02:28.787779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.950718ms"
	I1013 22:02:28.791397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.556885ms"
	I1013 22:02:28.791474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.068µs"
	I1013 22:02:28.798215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.326788ms"
	I1013 22:02:28.798295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.057µs"
	I1013 22:02:28.799490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.093µs"
	I1013 22:02:28.808073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.314µs"
	I1013 22:02:28.817075       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:02:28.837262       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:02:28.837290       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:02:34.094404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.740141ms"
	I1013 22:02:34.094503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.08µs"
	I1013 22:02:36.089546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.401µs"
	I1013 22:02:37.095942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.217µs"
	I1013 22:02:38.102117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.308µs"
	I1013 22:02:47.934438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.838456ms"
	I1013 22:02:47.934735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="152.975µs"
	I1013 22:02:52.143521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.071µs"
	I1013 22:02:53.145465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.403µs"
	
	
	==> kube-proxy [c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9] <==
	I1013 22:02:17.407048       1 server_others.go:69] "Using iptables proxy"
	I1013 22:02:17.417090       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1013 22:02:17.434239       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:02:17.436422       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:02:17.436455       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:02:17.436461       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:02:17.436483       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:02:17.436744       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:02:17.436764       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:17.437320       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:02:17.437342       1 config.go:315] "Starting node config controller"
	I1013 22:02:17.437354       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:02:17.437343       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:02:17.437320       1 config.go:188] "Starting service config controller"
	I1013 22:02:17.437641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:02:17.537509       1 shared_informer.go:318] Caches are synced for node config
	I1013 22:02:17.538709       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:02:17.538743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6] <==
	I1013 22:02:15.896916       1 serving.go:348] Generated self-signed cert in-memory
	I1013 22:02:16.611967       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 22:02:16.612026       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:16.616712       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:16.616747       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 22:02:16.616765       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:16.616791       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 22:02:16.616714       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1013 22:02:16.616841       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1013 22:02:16.617942       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 22:02:16.618023       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 22:02:16.717310       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 22:02:16.717325       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1013 22:02:16.717325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:02:28 old-k8s-version-534822 kubelet[721]: I1013 22:02:28.884267     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0023517f-8e99-45ca-9130-c16e98edc916-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-85qc8\" (UID: \"0023517f-8e99-45ca-9130-c16e98edc916\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-85qc8"
	Oct 13 22:02:28 old-k8s-version-534822 kubelet[721]: I1013 22:02:28.884313     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/341907f2-6d9a-47df-ab82-6edcac7cba80-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jfmmb\" (UID: \"341907f2-6d9a-47df-ab82-6edcac7cba80\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb"
	Oct 13 22:02:34 old-k8s-version-534822 kubelet[721]: I1013 22:02:34.085899     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-85qc8" podStartSLOduration=1.724950855 podCreationTimestamp="2025-10-13 22:02:28 +0000 UTC" firstStartedPulling="2025-10-13 22:02:29.111749348 +0000 UTC m=+15.210262609" lastFinishedPulling="2025-10-13 22:02:33.472635952 +0000 UTC m=+19.571149202" observedRunningTime="2025-10-13 22:02:34.085095913 +0000 UTC m=+20.183609175" watchObservedRunningTime="2025-10-13 22:02:34.085837448 +0000 UTC m=+20.184350710"
	Oct 13 22:02:36 old-k8s-version-534822 kubelet[721]: I1013 22:02:36.078980     721 scope.go:117] "RemoveContainer" containerID="9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: I1013 22:02:37.084099     721 scope.go:117] "RemoveContainer" containerID="9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: I1013 22:02:37.084267     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: E1013 22:02:37.084622     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:38 old-k8s-version-534822 kubelet[721]: I1013 22:02:38.090548     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:38 old-k8s-version-534822 kubelet[721]: E1013 22:02:38.090829     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:39 old-k8s-version-534822 kubelet[721]: I1013 22:02:39.093106     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:39 old-k8s-version-534822 kubelet[721]: E1013 22:02:39.093386     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:48 old-k8s-version-534822 kubelet[721]: I1013 22:02:48.115910     721 scope.go:117] "RemoveContainer" containerID="b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	Oct 13 22:02:51 old-k8s-version-534822 kubelet[721]: I1013 22:02:51.000244     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: I1013 22:02:52.132501     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: I1013 22:02:52.132702     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: E1013 22:02:52.133118     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:53 old-k8s-version-534822 kubelet[721]: I1013 22:02:53.136200     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:53 old-k8s-version-534822 kubelet[721]: E1013 22:02:53.136448     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:59 old-k8s-version-534822 kubelet[721]: I1013 22:02:59.090410     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:59 old-k8s-version-534822 kubelet[721]: E1013 22:02:59.090701     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:03:01 old-k8s-version-534822 kubelet[721]: I1013 22:03:01.697209     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: kubelet.service: Consumed 1.416s CPU time.
	
	
	==> kubernetes-dashboard [bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7] <==
	2025/10/13 22:02:33 Using namespace: kubernetes-dashboard
	2025/10/13 22:02:33 Using in-cluster config to connect to apiserver
	2025/10/13 22:02:33 Using secret token for csrf signing
	2025/10/13 22:02:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:02:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:02:33 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 22:02:33 Generating JWE encryption key
	2025/10/13 22:02:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:02:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:02:33 Initializing JWE encryption key from synchronized object
	2025/10/13 22:02:33 Creating in-cluster Sidecar client
	2025/10/13 22:02:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:33 Serving insecurely on HTTP port: 9090
	2025/10/13 22:03:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:33 Starting overwatch
	
	
	==> storage-provisioner [37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc] <==
	I1013 22:02:48.161695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:02:48.169724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:02:48.169784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084] <==
	I1013 22:02:17.378570       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:02:47.382429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-534822 -n old-k8s-version-534822: exit status 2 (349.85543ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-534822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-534822
helpers_test.go:243: (dbg) docker inspect old-k8s-version-534822:

-- stdout --
	[
	    {
	        "Id": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	        "Created": "2025-10-13T22:00:56.40821218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:02:07.261599258Z",
	            "FinishedAt": "2025-10-13T22:02:06.431762942Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hostname",
	        "HostsPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/hosts",
	        "LogPath": "/var/lib/docker/containers/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4/cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4-json.log",
	        "Name": "/old-k8s-version-534822",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-534822:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-534822",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cebe2b59b715e11d4a1d157870b568e5326119ff8587e8bafeeae046fb0d5ef4",
	                "LowerDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3eced189884b262317386087129a706fd41bab22a49fa1875ac763be6612488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-534822",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-534822/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-534822",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-534822",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa70e4e266e2d0cb1159049be83189903786428371915674620b0ef8805a0e9c",
	            "SandboxKey": "/var/run/docker/netns/fa70e4e266e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-534822": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:6f:c3:e0:5f:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d1498e7b1a230857c86022c34281ff31ff5a8fd51b2621fd4063f6a1e47ae63",
	                    "EndpointID": "d5d66a37f5ee7bf00ef8e83eda6b4dd34854594c06e449813f0e3467343431c4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-534822",
	                        "cebe2b59b715"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822: exit status 2 (349.48251ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-534822 logs -n 25: (1.353113901s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo containerd config dump                                                                                                                                                                                                  │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102             │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902  │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337         │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ stop    │ -p old-k8s-version-534822 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337         │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337         │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337         │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337         │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-050146 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-050146 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
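
	The audit table above and each `==> ... <==` section below come from a single `minikube logs` dump. To regenerate it for the failing profile (the `--file` flag is optional and just redirects the output to a file):

		out/minikube-linux-amd64 -p old-k8s-version-534822 logs --file=/tmp/old-k8s-version-534822.log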
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:03:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:03:05.896307  474064 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:05.896551  474064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:05.896561  474064 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:05.896565  474064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:05.896816  474064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:05.897423  474064 out.go:368] Setting JSON to false
	I1013 22:03:05.898878  474064 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6334,"bootTime":1760386652,"procs":484,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:03:05.899029  474064 start.go:141] virtualization: kvm guest
	I1013 22:03:05.900960  474064 out.go:179] * [kubernetes-upgrade-050146] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:03:05.902428  474064 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:03:05.902454  474064 notify.go:220] Checking for updates...
	I1013 22:03:05.905849  474064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:03:05.907203  474064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:03:05.908436  474064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:03:05.912205  474064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:03:05.914000  474064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:03:05.915723  474064 config.go:182] Loaded profile config "kubernetes-upgrade-050146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:05.916234  474064 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:03:05.943363  474064 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:03:05.943482  474064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:06.014299  474064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-13 22:03:06.00215104 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:06.014455  474064 docker.go:318] overlay module found
	I1013 22:03:06.016298  474064 out.go:179] * Using the docker driver based on existing profile
	I1013 22:03:06.017583  474064 start.go:305] selected driver: docker
	I1013 22:03:06.017602  474064 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-050146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-050146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:06.017729  474064 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:03:06.018369  474064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:06.089420  474064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-13 22:03:06.076937713 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:06.089779  474064 cni.go:84] Creating CNI manager for ""
	I1013 22:03:06.089854  474064 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:06.089919  474064 start.go:349] cluster config:
	{Name:kubernetes-upgrade-050146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-050146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:06.091842  474064 out.go:179] * Starting "kubernetes-upgrade-050146" primary control-plane node in "kubernetes-upgrade-050146" cluster
	I1013 22:03:06.093307  474064 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:03:06.094597  474064 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:03:06.095830  474064 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:06.095881  474064 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:03:06.095899  474064 cache.go:58] Caching tarball of preloaded images
	I1013 22:03:06.095955  474064 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:03:06.096054  474064 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:03:06.096071  474064 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:03:06.096200  474064 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/config.json ...
	I1013 22:03:06.118691  474064 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:03:06.118716  474064 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:03:06.118732  474064 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:03:06.118756  474064 start.go:360] acquireMachinesLock for kubernetes-upgrade-050146: {Name:mkb028a1ddbeb5270971a5ae804d6a2d284cd0f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:03:06.118835  474064 start.go:364] duration metric: took 45.726µs to acquireMachinesLock for "kubernetes-upgrade-050146"
	I1013 22:03:06.118853  474064 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:03:06.118860  474064 fix.go:54] fixHost starting: 
	I1013 22:03:06.119105  474064 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-050146 --format={{.State.Status}}
	I1013 22:03:06.137635  474064 fix.go:112] recreateIfNeeded on kubernetes-upgrade-050146: state=Running err=<nil>
	W1013 22:03:06.137683  474064 fix.go:138] unexpected machine state, will restart: <nil>
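
	The start being traced here is the second kubernetes-upgrade-050146 invocation from the audit table; it can be replayed verbatim with the command recorded there:

		out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=crio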
	
	
	==> CRI-O <==
	Oct 13 22:02:36 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:36.12959146Z" level=info msg="Started container" PID=1734 containerID=5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper id=5bccb2ae-13c6-486a-a088-c6435c32e745 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cad5526c4d24689eda97e66fbf0bdb7054c249724461a791341dcaca24eb38c
	Oct 13 22:02:37 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:37.08537415Z" level=info msg="Removing container: 9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c" id=9a7a9274-5ee2-4360-aab6-b4ef11897f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:37 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:37.100054572Z" level=info msg="Removed container 9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=9a7a9274-5ee2-4360-aab6-b4ef11897f7b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.11642424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1810179e-6ed0-484a-9acf-d491d730cd4a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.117385664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d210ef85-3bdb-4c51-a044-44a0059714e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.118372655Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f4a497bf-787b-47b6-b9f2-10765703ba11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.118675721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.122986218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123217827Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/65d9ce38fe5f67ecc36d1529191bc70690b99903d16fe0b0d6744d477c04f873/merged/etc/passwd: no such file or directory"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123251952Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/65d9ce38fe5f67ecc36d1529191bc70690b99903d16fe0b0d6744d477c04f873/merged/etc/group: no such file or directory"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.123570692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.146117142Z" level=info msg="Created container 37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc: kube-system/storage-provisioner/storage-provisioner" id=f4a497bf-787b-47b6-b9f2-10765703ba11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.146856058Z" level=info msg="Starting container: 37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc" id=26259238-6736-4b47-acc1-29f64ca2988f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:48 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:48.148695404Z" level=info msg="Started container" PID=1752 containerID=37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc description=kube-system/storage-provisioner/storage-provisioner id=26259238-6736-4b47-acc1-29f64ca2988f name=/runtime.v1.RuntimeService/StartContainer sandboxID=571943ba0d878876d8c48de5a3dee70063b7bc93c99564879691a6d486956f6c
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.001004573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bc46fcf3-a92a-4ed7-8c90-4f8eb90ce15f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.023723828Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bab9bb86-e987-493d-b007-bdb5a39ceec2 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.025033181Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=a423b561-465d-4b66-8021-6bad57e1235b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.025304434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.065536426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.066147984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.111973631Z" level=info msg="Created container 24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=a423b561-465d-4b66-8021-6bad57e1235b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.112729116Z" level=info msg="Starting container: 24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46" id=ee245e28-6adb-490d-86bf-33abb67fd2ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:02:51 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:51.115305853Z" level=info msg="Started container" PID=1785 containerID=24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper id=ee245e28-6adb-490d-86bf-33abb67fd2ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cad5526c4d24689eda97e66fbf0bdb7054c249724461a791341dcaca24eb38c
	Oct 13 22:02:52 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:52.13374535Z" level=info msg="Removing container: 5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc" id=b5e20a0a-2e12-44b5-a7eb-a5adb336502d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:02:52 old-k8s-version-534822 crio[561]: time="2025-10-13T22:02:52.144784478Z" level=info msg="Removed container 5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb/dashboard-metrics-scraper" id=b5e20a0a-2e12-44b5-a7eb-a5adb336502d name=/runtime.v1.RuntimeService/RemoveContainer
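
	To pull CRI-O state directly from a node, the same ssh invocations the harness ran against the cilium profile (see the audit table) apply to any profile, e.g.:

		out/minikube-linux-amd64 ssh -p old-k8s-version-534822 sudo systemctl status crio --all --full --no-pager
		out/minikube-linux-amd64 ssh -p old-k8s-version-534822 sudo crio config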
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	24a60a5877551       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   6cad5526c4d24       dashboard-metrics-scraper-5f989dc9cf-jfmmb       kubernetes-dashboard
	37228398f0d5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   571943ba0d878       storage-provisioner                              kube-system
	bd7cae91130f0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   606c28eefcf3d       kubernetes-dashboard-8694d4445c-85qc8            kubernetes-dashboard
	0babcc5fd2693       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   617a3124b10b7       busybox                                          default
	c111ec4bcc5b1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   84eb710fbef5f       coredns-5dd5756b68-wx29h                         kube-system
	c5ef5eaa11496       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   b96e4c5e23477       kube-proxy-dvt68                                 kube-system
	b624ac084d77a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   571943ba0d878       storage-provisioner                              kube-system
	e04dc6fa107a5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   72151db667390       kindnet-snc6w                                    kube-system
	8f0311ea43bb5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   e94a3fe95484d       kube-controller-manager-old-k8s-version-534822   kube-system
	a8a18841fbef4       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   be7c7640ae258       kube-apiserver-old-k8s-version-534822            kube-system
	90f9e9007916b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   e938191aa40fb       kube-scheduler-old-k8s-version-534822            kube-system
	fef3d5bba9942       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   d9a644625ff23       etcd-old-k8s-version-534822                      kube-system
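
	The container status table matches crictl's listing; it can be reproduced on the node (assuming crictl is on the node's PATH, as it is in recent kicbase images) with:

		out/minikube-linux-amd64 ssh -p old-k8s-version-534822 sudo crictl ps -a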
	
	
	==> coredns [c111ec4bcc5b125ec48f663bc7cd06e29efb01497a18ce0020efd3eaff6f1fd1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48964 - 41550 "HINFO IN 6307983208634643048.727038821031246850. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.080123795s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
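
	CoreDNS came up with an unsynced Kubernetes API and its ready plugin was still polling at the time of the dump. The live stream can be followed with the conventional kubeadm label selector (an assumption for this cluster, though it matches the coredns-5dd5756b68 pod above):

		kubectl --context old-k8s-version-534822 -n kube-system logs -l k8s-app=kube-dns -f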
	
	
	==> describe nodes <==
	Name:               old-k8s-version-534822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-534822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-534822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-534822
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:02:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:02:46 +0000   Mon, 13 Oct 2025 22:01:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-534822
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                6dba6f53-90ba-4da3-b3ef-d819199a3aeb
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-wx29h                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-534822                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-snc6w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-534822             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-534822    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-dvt68                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-534822             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jfmmb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-85qc8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-534822 event: Registered Node old-k8s-version-534822 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-534822 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)    kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)    kubelet          Node old-k8s-version-534822 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)    kubelet          Node old-k8s-version-534822 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-534822 event: Registered Node old-k8s-version-534822 in Controller
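
	The node description above is ordinary kubectl output; minikube names the kubeconfig context after the profile, so the conditions can be re-checked at any point with:

		kubectl --context old-k8s-version-534822 describe node old-k8s-version-534822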
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [fef3d5bba99429e04d8f13cbaad68788e9213e26c246beaa2f1d3bea2b92c9f2] <==
	{"level":"info","ts":"2025-10-13T22:02:14.593539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:02:14.593572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:02:14.595157Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.595225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.595237Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:02:14.59622Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-13T22:02:14.597034Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-13T22:02:14.597161Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-13T22:02:14.596969Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:02:14.597983Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:02:14.598082Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:02:14.881331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-13T22:02:14.881728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.881786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-13T22:02:14.884189Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-534822 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:02:14.884226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:02:14.884346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:02:14.884568Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:02:14.886629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-13T22:02:14.886652Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T22:02:14.886485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:03:07 up  1:45,  0 user,  load average: 3.56, 3.32, 5.91
	Linux old-k8s-version-534822 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e04dc6fa107a5c56236b3d443172131ce65ded3d8adf9775024f6a49e9772e8e] <==
	I1013 22:02:17.536966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:02:17.537231       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:02:17.537370       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:02:17.537389       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:02:17.537415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:02:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:02:17.739079       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:02:17.739114       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:02:17.739153       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:02:17.739278       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:02:18.316484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:02:18.316515       1 metrics.go:72] Registering metrics
	I1013 22:02:18.316616       1 controller.go:711] "Syncing nftables rules"
	I1013 22:02:27.832568       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:27.832651       1 main.go:301] handling current node
	I1013 22:02:37.832308       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:37.832345       1 main.go:301] handling current node
	I1013 22:02:47.832399       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:47.832434       1 main.go:301] handling current node
	I1013 22:02:57.832325       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:02:57.832372       1 main.go:301] handling current node
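
	The bracketed ID in the section header is the kindnet container's full ID, so the same log can be fetched on the node with crictl, which accepts any unique ID prefix:

		out/minikube-linux-amd64 ssh -p old-k8s-version-534822 sudo crictl logs e04dc6fa107a5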
	
	
	==> kube-apiserver [a8a18841fbef49205a8df405497d96c6bb674b58aa7107bc74083ff4a27bf0db] <==
	I1013 22:02:16.043869       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1013 22:02:16.044158       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1013 22:02:16.044195       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1013 22:02:16.044315       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 22:02:16.044352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:02:16.044424       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 22:02:16.044608       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1013 22:02:16.044819       1 aggregator.go:166] initial CRD sync complete...
	I1013 22:02:16.044855       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 22:02:16.044886       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:02:16.044910       1 cache.go:39] Caches are synced for autoregister controller
	E1013 22:02:16.053528       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:02:16.102017       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1013 22:02:16.865459       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:02:16.895563       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:02:16.912475       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:02:16.919412       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:02:16.925773       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:02:16.946700       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:02:16.959894       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.252.244"}
	I1013 22:02:16.971358       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.117.218"}
	I1013 22:02:28.270216       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:02:28.619346       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:02:28.619348       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:02:28.768450       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8f0311ea43bb503a3f6cef3444dce8ce4614329582f6cd4bd7b1f02c9bf17bb2] <==
	I1013 22:02:28.723554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.099µs"
	I1013 22:02:28.772027       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 22:02:28.773613       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 22:02:28.779052       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-85qc8"
	I1013 22:02:28.779080       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jfmmb"
	I1013 22:02:28.785785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.475139ms"
	I1013 22:02:28.787779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.950718ms"
	I1013 22:02:28.791397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.556885ms"
	I1013 22:02:28.791474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.068µs"
	I1013 22:02:28.798215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.326788ms"
	I1013 22:02:28.798295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.057µs"
	I1013 22:02:28.799490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.093µs"
	I1013 22:02:28.808073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.314µs"
	I1013 22:02:28.817075       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:02:28.837262       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:02:28.837290       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:02:34.094404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.740141ms"
	I1013 22:02:34.094503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.08µs"
	I1013 22:02:36.089546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.401µs"
	I1013 22:02:37.095942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.217µs"
	I1013 22:02:38.102117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.308µs"
	I1013 22:02:47.934438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.838456ms"
	I1013 22:02:47.934735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="152.975µs"
	I1013 22:02:52.143521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.071µs"
	I1013 22:02:53.145465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.403µs"
	
	
	==> kube-proxy [c5ef5eaa114969042b86e33d9108fd252b477bdb7ed4ddd8c2f43db87e5079a9] <==
	I1013 22:02:17.407048       1 server_others.go:69] "Using iptables proxy"
	I1013 22:02:17.417090       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1013 22:02:17.434239       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:02:17.436422       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:02:17.436455       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:02:17.436461       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:02:17.436483       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:02:17.436744       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:02:17.436764       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:17.437320       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:02:17.437342       1 config.go:315] "Starting node config controller"
	I1013 22:02:17.437354       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:02:17.437343       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:02:17.437320       1 config.go:188] "Starting service config controller"
	I1013 22:02:17.437641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:02:17.537509       1 shared_informer.go:318] Caches are synced for node config
	I1013 22:02:17.538709       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:02:17.538743       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [90f9e9007916b0a8ae74e840abbcb9cbfc1ce8e26a1eb71f02c223f888d9a6d6] <==
	I1013 22:02:15.896916       1 serving.go:348] Generated self-signed cert in-memory
	I1013 22:02:16.611967       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 22:02:16.612026       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:16.616712       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:16.616747       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 22:02:16.616765       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:16.616791       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 22:02:16.616714       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1013 22:02:16.616841       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1013 22:02:16.617942       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 22:02:16.618023       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 22:02:16.717310       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 22:02:16.717325       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1013 22:02:16.717325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:02:28 old-k8s-version-534822 kubelet[721]: I1013 22:02:28.884267     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0023517f-8e99-45ca-9130-c16e98edc916-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-85qc8\" (UID: \"0023517f-8e99-45ca-9130-c16e98edc916\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-85qc8"
	Oct 13 22:02:28 old-k8s-version-534822 kubelet[721]: I1013 22:02:28.884313     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/341907f2-6d9a-47df-ab82-6edcac7cba80-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jfmmb\" (UID: \"341907f2-6d9a-47df-ab82-6edcac7cba80\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb"
	Oct 13 22:02:34 old-k8s-version-534822 kubelet[721]: I1013 22:02:34.085899     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-85qc8" podStartSLOduration=1.724950855 podCreationTimestamp="2025-10-13 22:02:28 +0000 UTC" firstStartedPulling="2025-10-13 22:02:29.111749348 +0000 UTC m=+15.210262609" lastFinishedPulling="2025-10-13 22:02:33.472635952 +0000 UTC m=+19.571149202" observedRunningTime="2025-10-13 22:02:34.085095913 +0000 UTC m=+20.183609175" watchObservedRunningTime="2025-10-13 22:02:34.085837448 +0000 UTC m=+20.184350710"
	Oct 13 22:02:36 old-k8s-version-534822 kubelet[721]: I1013 22:02:36.078980     721 scope.go:117] "RemoveContainer" containerID="9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: I1013 22:02:37.084099     721 scope.go:117] "RemoveContainer" containerID="9714b7008811138922472aa2965eb8263afeb46dd4e028180835324f0267bd1c"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: I1013 22:02:37.084267     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:37 old-k8s-version-534822 kubelet[721]: E1013 22:02:37.084622     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:38 old-k8s-version-534822 kubelet[721]: I1013 22:02:38.090548     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:38 old-k8s-version-534822 kubelet[721]: E1013 22:02:38.090829     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:39 old-k8s-version-534822 kubelet[721]: I1013 22:02:39.093106     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:39 old-k8s-version-534822 kubelet[721]: E1013 22:02:39.093386     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:48 old-k8s-version-534822 kubelet[721]: I1013 22:02:48.115910     721 scope.go:117] "RemoveContainer" containerID="b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084"
	Oct 13 22:02:51 old-k8s-version-534822 kubelet[721]: I1013 22:02:51.000244     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: I1013 22:02:52.132501     721 scope.go:117] "RemoveContainer" containerID="5d904bfd1c88bf4d4541a63d27bc753cbcaa659f72584f5913675e8900ba16fc"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: I1013 22:02:52.132702     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:52 old-k8s-version-534822 kubelet[721]: E1013 22:02:52.133118     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:53 old-k8s-version-534822 kubelet[721]: I1013 22:02:53.136200     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:53 old-k8s-version-534822 kubelet[721]: E1013 22:02:53.136448     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:02:59 old-k8s-version-534822 kubelet[721]: I1013 22:02:59.090410     721 scope.go:117] "RemoveContainer" containerID="24a60a5877551a0b14faf87f3bd9b57fc758102f99a010ec769dd51aefc1de46"
	Oct 13 22:02:59 old-k8s-version-534822 kubelet[721]: E1013 22:02:59.090701     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jfmmb_kubernetes-dashboard(341907f2-6d9a-47df-ab82-6edcac7cba80)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jfmmb" podUID="341907f2-6d9a-47df-ab82-6edcac7cba80"
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:03:01 old-k8s-version-534822 kubelet[721]: I1013 22:03:01.697209     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:03:01 old-k8s-version-534822 systemd[1]: kubelet.service: Consumed 1.416s CPU time.
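
The kubelet entries above show CrashLoopBackOff doubling its restart delay for dashboard-metrics-scraper (back-off 10s, then back-off 20s). A minimal sketch of that schedule, assuming kubelet's documented doubling-with-cap behavior; the 5-minute cap is the upstream default and does not appear in these logs:

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelays returns the restart delays a kubelet-style back-off
	// would apply to a repeatedly crashing container: start at base, double
	// after each failure, never exceed maxDelay. This mirrors the
	// 10s -> 20s progression visible in the kubelet log above.
	func crashLoopDelays(base, maxDelay time.Duration, crashes int) []time.Duration {
		delays := make([]time.Duration, 0, crashes)
		d := base
		for i := 0; i < crashes; i++ {
			delays = append(delays, d)
			d *= 2
			if d > maxDelay {
				d = maxDelay
			}
		}
		return delays
	}

	func main() {
		// Hypothetical run: six consecutive crashes with the default
		// 10s base and 5m cap prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s.
		for i, d := range crashLoopDelays(10*time.Second, 5*time.Minute, 6) {
			fmt.Printf("crash %d: back-off %s\n", i+1, d)
		}
	}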
	
	
	==> kubernetes-dashboard [bd7cae91130f04be30cc57b1982ae36832e3a5f9220822a6aef22201699250b7] <==
	2025/10/13 22:02:33 Using namespace: kubernetes-dashboard
	2025/10/13 22:02:33 Using in-cluster config to connect to apiserver
	2025/10/13 22:02:33 Using secret token for csrf signing
	2025/10/13 22:02:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:02:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:02:33 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 22:02:33 Generating JWE encryption key
	2025/10/13 22:02:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:02:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:02:33 Initializing JWE encryption key from synchronized object
	2025/10/13 22:02:33 Creating in-cluster Sidecar client
	2025/10/13 22:02:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:33 Serving insecurely on HTTP port: 9090
	2025/10/13 22:03:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:33 Starting overwatch
	
	
	==> storage-provisioner [37228398f0d5b8da9cd2c42cbd3f96b5b2291545f591979cceded9621f58cafc] <==
	I1013 22:02:48.161695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:02:48.169724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:02:48.169784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 22:03:05.567541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:03:05.567721       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b441445e-9dde-4324-afa9-3eced6881d1d", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-534822_dc27a38a-7c95-46d6-b5d1-c732bb07acd0 became leader
	I1013 22:03:05.567791       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-534822_dc27a38a-7c95-46d6-b5d1-c732bb07acd0!
	I1013 22:03:05.668699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-534822_dc27a38a-7c95-46d6-b5d1-c732bb07acd0!
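
	This storage-provisioner instance only starts its controller after winning leader election on the kube-system/k8s.io-minikube-hostpath lock (an Endpoints object, per the event above). A sketch of the same gate using client-go's leaderelection package, assuming a recent client-go release; it uses a Leases lock, today's recommendation, where the provisioner shown here used Endpoints, and exact signatures can vary between client-go versions:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Same lease name as in the event above; a Lease object instead of
		// the legacy Endpoints lock.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			panic(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					fmt.Println("successfully acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					fmt.Println("lost lease; shutting down")
				},
			},
		})
	}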
	
	
	==> storage-provisioner [b624ac084d77afef6c81464d48d1eb794d43f2a9198b78ebfa5018b74a539084] <==
	I1013 22:02:17.378570       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:02:47.382429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
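
The second storage-provisioner container in the dump above died fatally because its startup probe, GET https://10.96.0.1:443/version?timeout=32s, hit an i/o timeout while the cluster was still coming up. A rough reproduction of that probe with net/http, just to show the failure shape; the real provisioner goes through client-go's discovery client, and the insecure TLS config here is only because this sketch has no in-cluster CA bundle:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Same 32s budget the provisioner's request carried.
			Timeout: 32 * time.Second,
			Transport: &http.Transport{
				// Sketch only: no in-cluster CA bundle available here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			// With no route to the service IP this surfaces exactly as the
			// "dial tcp 10.96.0.1:443: i/o timeout" line in the log above.
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}
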
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-534822 -n old-k8s-version-534822
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-534822 -n old-k8s-version-534822: exit status 2 (351.809196ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-534822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-080337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-080337 --alsologtostderr -v=1: exit status 80 (1.906690894s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-080337 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:03:30.169604  481350 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:30.169876  481350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:30.169886  481350 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:30.169890  481350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:30.170164  481350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:30.170430  481350 out.go:368] Setting JSON to false
	I1013 22:03:30.170484  481350 mustload.go:65] Loading cluster: no-preload-080337
	I1013 22:03:30.170834  481350 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:30.171276  481350 cli_runner.go:164] Run: docker container inspect no-preload-080337 --format={{.State.Status}}
	I1013 22:03:30.193017  481350 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:03:30.193366  481350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:30.293668  481350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 22:03:30.281355799 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:30.294535  481350 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-080337 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:03:30.296857  481350 out.go:179] * Pausing node no-preload-080337 ... 
	I1013 22:03:30.298127  481350 host.go:66] Checking if "no-preload-080337" exists ...
	I1013 22:03:30.298369  481350 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:30.298409  481350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-080337
	I1013 22:03:30.340832  481350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/no-preload-080337/id_rsa Username:docker}
	I1013 22:03:30.449363  481350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:30.468852  481350 pause.go:52] kubelet running: true
	I1013 22:03:30.468949  481350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:30.708495  481350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:30.708593  481350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:30.787267  481350 cri.go:89] found id: "a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4"
	I1013 22:03:30.787296  481350 cri.go:89] found id: "0a3d791b517ffdd9da09560885e05b173435fc2617cdb09b7a07530db6434db5"
	I1013 22:03:30.787303  481350 cri.go:89] found id: "c11d7ea10ff07c5ab8ae8feca92e0b0aa357520977cf80360fa01049e5b32b5f"
	I1013 22:03:30.787307  481350 cri.go:89] found id: "ca17462b8cc0e8271f720f326aced92a21cf66c7a613241186fd9386088f8ac4"
	I1013 22:03:30.787311  481350 cri.go:89] found id: "171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531"
	I1013 22:03:30.787316  481350 cri.go:89] found id: "148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d"
	I1013 22:03:30.787319  481350 cri.go:89] found id: "db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f"
	I1013 22:03:30.787323  481350 cri.go:89] found id: "3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5"
	I1013 22:03:30.787327  481350 cri.go:89] found id: "09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424"
	I1013 22:03:30.787343  481350 cri.go:89] found id: "f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	I1013 22:03:30.787347  481350 cri.go:89] found id: "ff734f532ee90c978ae4ce5cfb25e9648dbfe2eedcb5f833476bc6ebc32b57e8"
	I1013 22:03:30.787352  481350 cri.go:89] found id: ""
	I1013 22:03:30.787400  481350 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:30.799873  481350 retry.go:31] will retry after 247.518282ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:30Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:03:31.048235  481350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:31.061952  481350 pause.go:52] kubelet running: false
	I1013 22:03:31.062074  481350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:31.204965  481350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:31.205103  481350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:31.280705  481350 cri.go:89] found id: "a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4"
	I1013 22:03:31.280737  481350 cri.go:89] found id: "0a3d791b517ffdd9da09560885e05b173435fc2617cdb09b7a07530db6434db5"
	I1013 22:03:31.280742  481350 cri.go:89] found id: "c11d7ea10ff07c5ab8ae8feca92e0b0aa357520977cf80360fa01049e5b32b5f"
	I1013 22:03:31.280745  481350 cri.go:89] found id: "ca17462b8cc0e8271f720f326aced92a21cf66c7a613241186fd9386088f8ac4"
	I1013 22:03:31.280747  481350 cri.go:89] found id: "171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531"
	I1013 22:03:31.280751  481350 cri.go:89] found id: "148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d"
	I1013 22:03:31.280753  481350 cri.go:89] found id: "db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f"
	I1013 22:03:31.280755  481350 cri.go:89] found id: "3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5"
	I1013 22:03:31.280758  481350 cri.go:89] found id: "09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424"
	I1013 22:03:31.280771  481350 cri.go:89] found id: "f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	I1013 22:03:31.280774  481350 cri.go:89] found id: "ff734f532ee90c978ae4ce5cfb25e9648dbfe2eedcb5f833476bc6ebc32b57e8"
	I1013 22:03:31.280777  481350 cri.go:89] found id: ""
	I1013 22:03:31.280820  481350 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:31.294227  481350 retry.go:31] will retry after 424.712603ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:31Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:03:31.719582  481350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:31.736348  481350 pause.go:52] kubelet running: false
	I1013 22:03:31.736421  481350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:03:31.920151  481350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:03:31.920281  481350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:03:31.998807  481350 cri.go:89] found id: "a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4"
	I1013 22:03:31.998835  481350 cri.go:89] found id: "0a3d791b517ffdd9da09560885e05b173435fc2617cdb09b7a07530db6434db5"
	I1013 22:03:31.998841  481350 cri.go:89] found id: "c11d7ea10ff07c5ab8ae8feca92e0b0aa357520977cf80360fa01049e5b32b5f"
	I1013 22:03:31.998846  481350 cri.go:89] found id: "ca17462b8cc0e8271f720f326aced92a21cf66c7a613241186fd9386088f8ac4"
	I1013 22:03:31.998850  481350 cri.go:89] found id: "171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531"
	I1013 22:03:31.998871  481350 cri.go:89] found id: "148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d"
	I1013 22:03:31.998875  481350 cri.go:89] found id: "db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f"
	I1013 22:03:31.998880  481350 cri.go:89] found id: "3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5"
	I1013 22:03:31.998884  481350 cri.go:89] found id: "09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424"
	I1013 22:03:31.998892  481350 cri.go:89] found id: "f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	I1013 22:03:31.998897  481350 cri.go:89] found id: "ff734f532ee90c978ae4ce5cfb25e9648dbfe2eedcb5f833476bc6ebc32b57e8"
	I1013 22:03:31.998901  481350 cri.go:89] found id: ""
	I1013 22:03:31.998942  481350 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:03:32.014102  481350 out.go:203] 
	W1013 22:03:32.015538  481350 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:03:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:03:32.015569  481350 out.go:285] * 
	* 
	W1013 22:03:32.021784  481350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:03:32.023292  481350 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-080337 --alsologtostderr -v=1 failed: exit status 80
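
The stderr above shows the pattern behind the failure: minikube re-runs sudo runc list -f json through its retry helper with growing, randomized waits (247ms, then 424ms) and finally exits with GUEST_PAUSE when /run/runc never appears. A minimal sketch of that retry-until-exhausted loop; the function name and attempt budget here are illustrative, not minikube's actual pkg/util/retry API:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryCommand re-runs a command until it succeeds or the attempt
	// budget is spent, sleeping a randomized, roughly doubling interval
	// between tries -- the same shape as the "will retry after 247ms /
	// 424ms" lines in the stderr above.
	func retryCommand(name string, args []string, attempts int) error {
		wait := 200 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("attempt %d failed (%v), retrying after %s\n", i+1, err, sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		return fmt.Errorf("list running: %w", err)
	}

	func main() {
		// On this node /run/runc is missing, so every attempt exits
		// non-zero and the caller reports GUEST_PAUSE.
		if err := retryCommand("sudo", []string{"runc", "list", "-f", "json"}, 3); err != nil {
			fmt.Println("Exiting due to GUEST_PAUSE:", err)
		}
	}
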
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-080337
helpers_test.go:243: (dbg) docker inspect no-preload-080337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	        "Created": "2025-10-13T22:01:13.425171095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468753,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:02:31.767892266Z",
	            "FinishedAt": "2025-10-13T22:02:30.778671549Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hosts",
	        "LogPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8-json.log",
	        "Name": "/no-preload-080337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-080337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-080337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	                "LowerDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-080337",
	                "Source": "/var/lib/docker/volumes/no-preload-080337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-080337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-080337",
	                "name.minikube.sigs.k8s.io": "no-preload-080337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5640ab50be6b45d677fea13523620542458dfefc1b549685c4742db3ac5c731",
	            "SandboxKey": "/var/run/docker/netns/d5640ab50be6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-080337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:d2:b1:d8:f2:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "023fbfd0e79f229835d49fb4d5f52967eb961e42ade48e5f1189467342508af0",
	                    "EndpointID": "7159af935bc0a7b2fa3d899c89e433e68f46757dec0ffcaa15533f01e3d7b4b3",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-080337",
	                        "582c4b9df6d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
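
Earlier in the stderr, minikube resolved the node's SSH endpoint with docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}', which against the inspect output above yields 33068. A sketch that does the same lookup by decoding the JSON instead of using a Go template; the struct fields match the dump above, and error handling is trimmed to the essentials:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectDoc models only the NetworkSettings.Ports slice we need from
	// `docker inspect` output shaped like the dump above.
	type inspectDoc struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "no-preload-080337").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var docs []inspectDoc
		if err := json.Unmarshal(out, &docs); err != nil || len(docs) == 0 {
			fmt.Println("decode failed:", err)
			return
		}
		if b := docs[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
			// For the container above this prints: ssh endpoint 127.0.0.1:33068
			fmt.Printf("ssh endpoint %s:%s\n", b[0].HostIp, b[0].HostPort)
		}
	}
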
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337: exit status 2 (373.300465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-080337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-080337 logs -n 25: (1.255742482s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902     │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ stop    │ -p old-k8s-version-534822 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:03:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:03:15.737963  477441 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:15.738301  477441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:15.738312  477441 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:15.738316  477441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:15.738557  477441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:15.739095  477441 out.go:368] Setting JSON to false
	I1013 22:03:15.740395  477441 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6344,"bootTime":1760386652,"procs":473,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:03:15.740496  477441 start.go:141] virtualization: kvm guest
	I1013 22:03:15.742606  477441 out.go:179] * [default-k8s-diff-port-505851] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:03:15.744137  477441 notify.go:220] Checking for updates...
	I1013 22:03:15.744144  477441 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:03:15.745594  477441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:03:15.747079  477441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:03:15.748294  477441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:03:15.749547  477441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:03:15.750787  477441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:03:15.752693  477441 config.go:182] Loaded profile config "cert-expiration-894101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.752798  477441 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.752917  477441 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.753060  477441 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:03:15.777943  477441 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:03:15.778093  477441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:15.841292  477441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:03:15.830505283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:15.841434  477441 docker.go:318] overlay module found
	I1013 22:03:15.844436  477441 out.go:179] * Using the docker driver based on user configuration
	I1013 22:03:15.845889  477441 start.go:305] selected driver: docker
	I1013 22:03:15.845911  477441 start.go:925] validating driver "docker" against <nil>
	I1013 22:03:15.845927  477441 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:03:15.846656  477441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:15.914386  477441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-13 22:03:15.903775663 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:15.914648  477441 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:03:15.914974  477441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:15.917001  477441 out.go:179] * Using Docker driver with root privileges
	I1013 22:03:15.918170  477441 cni.go:84] Creating CNI manager for ""
	I1013 22:03:15.918255  477441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:15.918272  477441 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:03:15.918359  477441 start.go:349] cluster config:
	{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:15.920053  477441 out.go:179] * Starting "default-k8s-diff-port-505851" primary control-plane node in "default-k8s-diff-port-505851" cluster
	I1013 22:03:15.921500  477441 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:03:15.922806  477441 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:03:15.923852  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:15.923897  477441 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:03:15.923910  477441 cache.go:58] Caching tarball of preloaded images
	I1013 22:03:15.923969  477441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:03:15.924107  477441 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:03:15.924126  477441 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:03:15.924282  477441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:03:15.924315  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json: {Name:mkb4d5a74d02f3a2cdcdf9b4879867af4ffa44af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:15.946274  477441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:03:15.946302  477441 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:03:15.946320  477441 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:03:15.946355  477441 start.go:360] acquireMachinesLock for default-k8s-diff-port-505851: {Name:mkaf957bc5ced7f5c930a2e33ff0ee7c156af144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:03:15.946463  477441 start.go:364] duration metric: took 87.124µs to acquireMachinesLock for "default-k8s-diff-port-505851"
	I1013 22:03:15.946496  477441 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:03:15.946599  477441 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:03:11.432189  476377 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:11.432462  476377 start.go:159] libmachine.API.Create for "embed-certs-521669" (driver="docker")
	I1013 22:03:11.432500  476377 client.go:168] LocalClient.Create starting
	I1013 22:03:11.432577  476377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:11.432620  476377 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:11.432646  476377 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:11.432754  476377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:11.432787  476377 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:11.432801  476377 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:11.433249  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:11.451243  476377 cli_runner.go:211] docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:11.451324  476377 network_create.go:284] running [docker network inspect embed-certs-521669] to gather additional debugging logs...
	I1013 22:03:11.451345  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669
	W1013 22:03:11.469447  476377 cli_runner.go:211] docker network inspect embed-certs-521669 returned with exit code 1
	I1013 22:03:11.469504  476377 network_create.go:287] error running [docker network inspect embed-certs-521669]: docker network inspect embed-certs-521669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-521669 not found
	I1013 22:03:11.469533  476377 network_create.go:289] output of [docker network inspect embed-certs-521669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-521669 not found
	
	** /stderr **
	I1013 22:03:11.469718  476377 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:11.487501  476377 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:11.488158  476377 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:11.488770  476377 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:11.489428  476377 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c946d4d0529a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:85:25:23:b8:8e} reservation:<nil>}
	I1013 22:03:11.489866  476377 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-41a0a7263ae4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:f3:d9:f6:e7:45} reservation:<nil>}
	I1013 22:03:11.490377  476377 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-023fbfd0e79f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9a:52:07:fb:e7:b6} reservation:<nil>}
	I1013 22:03:11.491218  476377 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fc3720}
	I1013 22:03:11.491245  476377 network_create.go:124] attempt to create docker network embed-certs-521669 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1013 22:03:11.491297  476377 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-521669 embed-certs-521669
	I1013 22:03:11.554361  476377 network_create.go:108] docker network embed-certs-521669 192.168.103.0/24 created
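
The network.go lines above show how a subnet is chosen for the new cluster network: candidate 192.168.x.0/24 blocks are tried in order (49, 58, 67, ... in steps of 9, as seen in the log) and the first one not already claimed by an existing bridge is used. A minimal Go sketch of that scan, reconstructed from the log alone (the step-9 ladder is inferred; the real implementation also inspects host interfaces and holds a reservation):

package main

import "fmt"

// freeSubnet walks the candidate ladder seen in the log and returns the
// first /24 block not already marked as taken.
func freeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// Subnets the log reports as taken before embed-certs-521669 is created.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.103.0/24
	}
}
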
	I1013 22:03:11.554390  476377 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-521669" container
	I1013 22:03:11.554461  476377 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:11.574103  476377 cli_runner.go:164] Run: docker volume create embed-certs-521669 --label name.minikube.sigs.k8s.io=embed-certs-521669 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:11.593698  476377 oci.go:103] Successfully created a docker volume embed-certs-521669
	I1013 22:03:11.593776  476377 cli_runner.go:164] Run: docker run --rm --name embed-certs-521669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-521669 --entrypoint /usr/bin/test -v embed-certs-521669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:12.027133  476377 oci.go:107] Successfully prepared a docker volume embed-certs-521669
	I1013 22:03:12.027174  476377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:12.027196  476377 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:12.027254  476377 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-521669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:03:15.474512  476377 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-521669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.447201807s)
	I1013 22:03:15.474548  476377 kic.go:203] duration metric: took 3.447347241s to extract preloaded images to volume ...
	W1013 22:03:15.474662  476377 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:15.474705  476377 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:15.474753  476377 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:15.537080  476377 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-521669 --name embed-certs-521669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-521669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-521669 --network embed-certs-521669 --ip 192.168.103.2 --volume embed-certs-521669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:03:15.828585  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Running}}
	I1013 22:03:15.849234  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	I1013 22:03:15.870675  476377 cli_runner.go:164] Run: docker exec embed-certs-521669 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:15.924712  476377 oci.go:144] the created container "embed-certs-521669" has a running status.
	I1013 22:03:15.924742  476377 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa...
	I1013 22:03:16.078015  476377 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:16.113130  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	I1013 22:03:16.134647  476377 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:16.134676  476377 kic_runner.go:114] Args: [docker exec --privileged embed-certs-521669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:16.201077  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	W1013 22:03:13.910488  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:03:15.910917  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	I1013 22:03:16.910657  468497 pod_ready.go:94] pod "coredns-66bc5c9577-n6t7s" is "Ready"
	I1013 22:03:16.910686  468497 pod_ready.go:86] duration metric: took 34.006165322s for pod "coredns-66bc5c9577-n6t7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.913440  468497 pod_ready.go:83] waiting for pod "etcd-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.917936  468497 pod_ready.go:94] pod "etcd-no-preload-080337" is "Ready"
	I1013 22:03:16.917966  468497 pod_ready.go:86] duration metric: took 4.499065ms for pod "etcd-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.920321  468497 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.924156  468497 pod_ready.go:94] pod "kube-apiserver-no-preload-080337" is "Ready"
	I1013 22:03:16.924176  468497 pod_ready.go:86] duration metric: took 3.835719ms for pod "kube-apiserver-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.926302  468497 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.109757  468497 pod_ready.go:94] pod "kube-controller-manager-no-preload-080337" is "Ready"
	I1013 22:03:17.109793  468497 pod_ready.go:86] duration metric: took 183.46409ms for pod "kube-controller-manager-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.309044  468497 pod_ready.go:83] waiting for pod "kube-proxy-2scrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.708022  468497 pod_ready.go:94] pod "kube-proxy-2scrx" is "Ready"
	I1013 22:03:17.708055  468497 pod_ready.go:86] duration metric: took 398.979909ms for pod "kube-proxy-2scrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.908508  468497 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:18.308756  468497 pod_ready.go:94] pod "kube-scheduler-no-preload-080337" is "Ready"
	I1013 22:03:18.308787  468497 pod_ready.go:86] duration metric: took 400.253383ms for pod "kube-scheduler-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:18.308803  468497 pod_ready.go:40] duration metric: took 35.407537273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:18.364736  468497 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:03:18.368024  468497 out.go:179] * Done! kubectl is now configured to use "no-preload-080337" cluster and "default" namespace by default
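
The pod_ready.go lines above follow a simple poll-until-done pattern: each control-plane pod is checked until its Ready condition is true or the pod is gone, and the elapsed time is reported as a duration metric. A minimal Go sketch of that pattern, with a hypothetical checkPod callback standing in for the real Kubernetes API query:

package main

import (
	"fmt"
	"time"
)

// waitPodReady polls until the pod is Ready or gone, mirroring the
// "to be Ready or be gone" wait logged above.
func waitPodReady(name string, timeout time.Duration, checkPod func(string) (ready, gone bool)) error {
	start := time.Now()
	for time.Since(start) < timeout {
		if ready, gone := checkPod(name); ready || gone {
			fmt.Printf("duration metric: took %s for pod %q\n", time.Since(start), name)
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for pod %q", name)
}

func main() {
	polls := 0
	_ = waitPodReady("coredns-66bc5c9577-n6t7s", time.Minute, func(string) (bool, bool) {
		polls++
		return polls >= 3, false // pretend the pod turns Ready on the third poll
	})
}
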
	I1013 22:03:15.952136  477441 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:15.952407  477441 start.go:159] libmachine.API.Create for "default-k8s-diff-port-505851" (driver="docker")
	I1013 22:03:15.952448  477441 client.go:168] LocalClient.Create starting
	I1013 22:03:15.952537  477441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:15.952579  477441 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:15.952609  477441 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:15.952708  477441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:15.952739  477441 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:15.952753  477441 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:15.953187  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:15.972246  477441 cli_runner.go:211] docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:15.972332  477441 network_create.go:284] running [docker network inspect default-k8s-diff-port-505851] to gather additional debugging logs...
	I1013 22:03:15.972356  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851
	W1013 22:03:15.996117  477441 cli_runner.go:211] docker network inspect default-k8s-diff-port-505851 returned with exit code 1
	I1013 22:03:15.996179  477441 network_create.go:287] error running [docker network inspect default-k8s-diff-port-505851]: docker network inspect default-k8s-diff-port-505851: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-505851 not found
	I1013 22:03:15.996198  477441 network_create.go:289] output of [docker network inspect default-k8s-diff-port-505851]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-505851 not found
	
	** /stderr **
	I1013 22:03:15.996356  477441 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:16.016963  477441 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:16.018030  477441 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:16.019112  477441 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:16.020274  477441 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f5ac90}
	I1013 22:03:16.020302  477441 network_create.go:124] attempt to create docker network default-k8s-diff-port-505851 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:03:16.020372  477441 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 default-k8s-diff-port-505851
	I1013 22:03:16.089396  477441 network_create.go:108] docker network default-k8s-diff-port-505851 192.168.76.0/24 created
	I1013 22:03:16.089432  477441 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-505851" container
	I1013 22:03:16.089503  477441 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:16.116271  477441 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-505851 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:16.139909  477441 oci.go:103] Successfully created a docker volume default-k8s-diff-port-505851
	I1013 22:03:16.140041  477441 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-505851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --entrypoint /usr/bin/test -v default-k8s-diff-port-505851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:16.606803  477441 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-505851
	I1013 22:03:16.606851  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:16.606878  477441 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:16.606961  477441 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-505851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:03:16.221362  476377 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:16.221469  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.245621  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.245941  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.245962  476377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:16.394047  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-521669
	
	I1013 22:03:16.394082  476377 ubuntu.go:182] provisioning hostname "embed-certs-521669"
	I1013 22:03:16.394163  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.416457  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.416731  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.416790  476377 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-521669 && echo "embed-certs-521669" | sudo tee /etc/hostname
	I1013 22:03:16.587752  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-521669
	
	I1013 22:03:16.587863  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.610262  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.610551  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.610573  476377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-521669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-521669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-521669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:16.755473  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:16.755504  476377 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:16.755553  476377 ubuntu.go:190] setting up certificates
	I1013 22:03:16.755569  476377 provision.go:84] configureAuth start
	I1013 22:03:16.755641  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:16.775591  476377 provision.go:143] copyHostCerts
	I1013 22:03:16.775664  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:16.775673  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:16.775737  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:16.775854  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:16.775868  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:16.775898  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:16.775988  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:16.776013  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:16.776048  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:16.776176  476377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.embed-certs-521669 san=[127.0.0.1 192.168.103.2 embed-certs-521669 localhost minikube]
	I1013 22:03:17.290608  476377 provision.go:177] copyRemoteCerts
	I1013 22:03:17.290671  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:17.290709  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.311404  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:17.415094  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:17.442565  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:03:17.460884  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:03:17.480889  476377 provision.go:87] duration metric: took 725.302266ms to configureAuth
	I1013 22:03:17.480917  476377 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:17.481122  476377 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:17.481243  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.500948  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:17.501305  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:17.501336  476377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:17.783274  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:17.783307  476377 machine.go:96] duration metric: took 1.561917857s to provisionDockerMachine
	I1013 22:03:17.783317  476377 client.go:171] duration metric: took 6.350807262s to LocalClient.Create
	I1013 22:03:17.783331  476377 start.go:167] duration metric: took 6.350874531s to libmachine.API.Create "embed-certs-521669"
	I1013 22:03:17.783340  476377 start.go:293] postStartSetup for "embed-certs-521669" (driver="docker")
	I1013 22:03:17.783352  476377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:17.783422  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:17.783470  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.803863  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:17.907015  476377 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:17.911487  476377 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:17.911525  476377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:17.911539  476377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:17.911612  476377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:17.911736  476377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:17.911878  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:17.920464  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:17.944107  476377 start.go:296] duration metric: took 160.751032ms for postStartSetup
	I1013 22:03:17.944526  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:17.962986  476377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/config.json ...
	I1013 22:03:17.963368  476377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:17.963433  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.982848  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.080160  476377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:18.084919  476377 start.go:128] duration metric: took 6.654639128s to createHost
	I1013 22:03:18.084950  476377 start.go:83] releasing machines lock for "embed-certs-521669", held for 6.65478014s
	I1013 22:03:18.085047  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:18.103381  476377 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:18.103445  476377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:18.103454  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:18.103538  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:18.124826  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.125175  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.282543  476377 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:18.289969  476377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:18.331007  476377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:18.336354  476377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:18.336433  476377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:18.370656  476377 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:18.370685  476377 start.go:495] detecting cgroup driver to use...
	I1013 22:03:18.370719  476377 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:18.370790  476377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:18.390616  476377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:18.407690  476377 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:18.407749  476377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:18.429867  476377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:18.453509  476377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:18.551968  476377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:18.655209  476377 docker.go:234] disabling docker service ...
	I1013 22:03:18.655294  476377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:18.684426  476377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:18.699901  476377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:18.806311  476377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:18.892541  476377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:18.907217  476377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:18.924027  476377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:18.924084  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.938177  476377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:18.938264  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.949869  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.961316  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.972845  476377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:18.991342  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:19.002231  476377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:19.023848  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
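
Net effect of the sed edits above, sketched as the resulting /etc/crio/crio.conf.d/02-crio.conf fragment (section headers are assumed from a stock cri-o config, not shown in the log; only the keys the commands touch appear here):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
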
	I1013 22:03:19.043774  476377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:19.053204  476377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:19.061638  476377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:19.149544  476377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:03:21.338723  476377 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.189138984s)
	I1013 22:03:21.338760  476377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:21.338817  476377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:21.343675  476377 start.go:563] Will wait 60s for crictl version
	I1013 22:03:21.343812  476377 ssh_runner.go:195] Run: which crictl
	I1013 22:03:21.348134  476377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:21.378299  476377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:21.378394  476377 ssh_runner.go:195] Run: crio --version
	I1013 22:03:21.413031  476377 ssh_runner.go:195] Run: crio --version
	I1013 22:03:21.450173  476377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:21.451796  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:21.472239  476377 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:21.477215  476377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
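The hosts update above uses a filter-append-copy idiom so the entry stays idempotent: any stale host.minikube.internal line is dropped, the fresh mapping is appended, and the temp file is copied back under sudo (a plain > redirect onto /etc/hosts would run unprivileged and fail). Schematically:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.103.1\thost.minikube.internal\n'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts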
	I1013 22:03:21.489114  476377 kubeadm.go:883] updating cluster {Name:embed-certs-521669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:21.489245  476377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:21.489306  476377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:21.527713  476377 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:21.527735  476377 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:21.527786  476377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:21.558294  476377 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:21.558320  476377 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:21.558330  476377 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:21.558445  476377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-521669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:03:21.558545  476377 ssh_runner.go:195] Run: crio config
	I1013 22:03:21.609496  476377 cni.go:84] Creating CNI manager for ""
	I1013 22:03:21.609524  476377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:21.609547  476377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:21.609579  476377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-521669 NodeName:embed-certs-521669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:21.609761  476377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-521669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
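A rendered config like the one above can be exercised offline before the real init using kubeadm's dry-run mode; an illustrative invocation with the binary and config paths this log uses (not something the harness itself runs):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run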
	
	I1013 22:03:21.609832  476377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:21.618769  476377 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:21.618857  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:21.629119  476377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1013 22:03:21.644312  476377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:21.663903  476377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1013 22:03:21.680429  476377 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:21.684505  476377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:21.695900  476377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:21.790892  476377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:21.815795  476377 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669 for IP: 192.168.103.2
	I1013 22:03:21.815822  476377 certs.go:195] generating shared ca certs ...
	I1013 22:03:21.815840  476377 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:21.816024  476377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:21.816092  476377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:21.816108  476377 certs.go:257] generating profile certs ...
	I1013 22:03:21.816175  476377 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key
	I1013 22:03:21.816199  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt with IP's: []
	I1013 22:03:22.052423  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt ...
	I1013 22:03:22.052450  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt: {Name:mkbb345b9e3c6179c3a1a0679dee2b90878ff68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.052616  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key ...
	I1013 22:03:22.052626  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key: {Name:mk591cc5b0a51a208e850f5205e0170f11155221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.052707  476377 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79
	I1013 22:03:22.052717  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1013 22:03:22.067243  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 ...
	I1013 22:03:22.067274  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79: {Name:mk0b4fe38a1afbd912ef623383b5de00796c2fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.067447  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79 ...
	I1013 22:03:22.067463  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79: {Name:mkb277f2c351051028070831936aea78f46fa5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.067540  476377 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt
	I1013 22:03:22.067626  476377 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key
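The IP SANs requested for the apiserver cert above are all derivable from the cluster config: 10.96.0.1 is the first address of ServiceCIDR 10.96.0.0/12 (the in-cluster kubernetes.default ClusterIP), 192.168.103.2 is the node IP, 127.0.0.1 covers loopback, and 10.0.0.1 is an additional address minikube has historically included. They can be read back with openssl (illustrative):

	openssl x509 -text -noout \
	  -in /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'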
	I1013 22:03:22.067686  476377 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key
	I1013 22:03:22.067702  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt with IP's: []
	I1013 22:03:22.346725  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt ...
	I1013 22:03:22.346762  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt: {Name:mk1f76bf76f25c455937efcd4676a0ac1e68b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.346936  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key ...
	I1013 22:03:22.346951  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key: {Name:mk3e8041d1eb09d22e7cd1e2cfa12be080df28b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.347157  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:22.347197  476377 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:22.347209  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:22.347249  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:22.347271  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:22.347293  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:22.347330  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:22.348026  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:22.366963  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:22.386244  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:22.406535  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:22.425944  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 22:03:22.444800  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:22.463641  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:22.482633  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:03:22.502058  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:22.522062  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:22.541313  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:22.560633  476377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:22.573837  476377 ssh_runner.go:195] Run: openssl version
	I1013 22:03:22.580595  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:22.589689  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.593692  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.593750  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.627884  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:22.637233  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:22.646382  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.650916  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.651016  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.686723  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:03:22.696518  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:22.706194  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.710674  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.710743  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.749198  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
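The opaque link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: OpenSSL locates a CA under /etc/ssl/certs by hashing the certificate subject and appending .0, which is why each ln is preceded by an openssl x509 -hash run. Illustrative output for the last pair:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem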
	I1013 22:03:22.758410  476377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:22.762370  476377 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:22.762430  476377 kubeadm.go:400] StartCluster: {Name:embed-certs-521669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:22.762507  476377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:22.762579  476377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:22.792836  476377 cri.go:89] found id: ""
	I1013 22:03:22.792905  476377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:22.801525  476377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:22.809523  476377 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:22.809582  476377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:22.817337  476377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:22.817353  476377 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:22.817403  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:22.825291  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:22.825347  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:22.833187  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:22.840934  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:22.840983  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:22.849131  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:22.857090  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:22.857148  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:22.864854  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:22.874181  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:22.874241  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:03:22.882804  476377 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:22.924902  476377 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:22.924967  476377 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:22.947708  476377 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:22.947825  476377 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:22.947906  476377 kubeadm.go:318] OS: Linux
	I1013 22:03:22.947983  476377 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:22.948057  476377 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:22.948126  476377 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:22.948221  476377 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:22.948273  476377 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:22.948353  476377 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:22.948409  476377 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:22.948502  476377 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:23.011520  476377 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:23.011660  476377 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:23.011794  476377 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:23.019656  476377 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:03:21.243680  477441 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-505851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.636646298s)
	I1013 22:03:21.243714  477441 kic.go:203] duration metric: took 4.636832025s to extract preloaded images to volume ...
	W1013 22:03:21.243794  477441 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:21.243828  477441 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:21.243864  477441 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:21.308267  477441 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-505851 --name default-k8s-diff-port-505851 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --network default-k8s-diff-port-505851 --ip 192.168.76.2 --volume default-k8s-diff-port-505851:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:03:21.606891  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Running}}
	I1013 22:03:21.628447  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:21.647849  477441 cli_runner.go:164] Run: docker exec default-k8s-diff-port-505851 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:21.699575  477441 oci.go:144] the created container "default-k8s-diff-port-505851" has a running status.
	I1013 22:03:21.699614  477441 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa...
	I1013 22:03:21.906662  477441 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:21.943935  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:21.967577  477441 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:21.967604  477441 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-505851 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:22.023950  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:22.046271  477441 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:22.046396  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.065060  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.065390  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.065408  477441 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:22.206150  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
	I1013 22:03:22.206185  477441 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-505851"
	I1013 22:03:22.206259  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.225627  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.225927  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.225950  477441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-505851 && echo "default-k8s-diff-port-505851" | sudo tee /etc/hostname
	I1013 22:03:22.379118  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
	I1013 22:03:22.379216  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.399213  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.399441  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.399467  477441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-505851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-505851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-505851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:22.538187  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:22.538222  477441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:22.538281  477441 ubuntu.go:190] setting up certificates
	I1013 22:03:22.538299  477441 provision.go:84] configureAuth start
	I1013 22:03:22.538373  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:22.558015  477441 provision.go:143] copyHostCerts
	I1013 22:03:22.558079  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:22.558091  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:22.558151  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:22.558243  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:22.558251  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:22.558277  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:22.558354  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:22.558366  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:22.558401  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:22.558507  477441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-505851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-505851 localhost minikube]
	I1013 22:03:22.863338  477441 provision.go:177] copyRemoteCerts
	I1013 22:03:22.863403  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:22.863462  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.882550  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:22.983590  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:03:23.004815  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:23.025253  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:23.043559  477441 provision.go:87] duration metric: took 505.243372ms to configureAuth
	I1013 22:03:23.043592  477441 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:23.043750  477441 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:23.043851  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.061730  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:23.062029  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:23.062056  477441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:23.314467  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
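The sysconfig file written above only matters because the crio unit in the kicbase image is assumed to source it; the trailing systemctl restart crio in the same command is what makes the new flag take effect. A sketch of the expected wiring (hypothetical unit fragment, not captured from the node):

	# /etc/sysconfig/crio.minikube, as written above:
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# consumed by the service, roughly:
	#   EnvironmentFile=-/etc/sysconfig/crio.minikube
	#   ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS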
	
	I1013 22:03:23.314494  477441 machine.go:96] duration metric: took 1.268196643s to provisionDockerMachine
	I1013 22:03:23.314508  477441 client.go:171] duration metric: took 7.362049203s to LocalClient.Create
	I1013 22:03:23.314535  477441 start.go:167] duration metric: took 7.362130092s to libmachine.API.Create "default-k8s-diff-port-505851"
	I1013 22:03:23.314546  477441 start.go:293] postStartSetup for "default-k8s-diff-port-505851" (driver="docker")
	I1013 22:03:23.314561  477441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:23.314628  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:23.314680  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.332760  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.434502  477441 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:23.438499  477441 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:23.438536  477441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:23.438552  477441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:23.438617  477441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:23.438725  477441 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:23.438847  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:23.447664  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:23.471585  477441 start.go:296] duration metric: took 157.023965ms for postStartSetup
	I1013 22:03:23.472033  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:23.492268  477441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:03:23.492537  477441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:23.492587  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.510553  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.606834  477441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
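The two df probes feed minikube's free-space check: with -h, column 5 of the data row is Use%; with -BG, column 4 is Avail in 1 GiB blocks. Run by hand they produce output like this (values illustrative):

	$ df -h /var | awk 'NR==2{print $5}'
	31%
	$ df -BG /var | awk 'NR==2{print $4}'
	203G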
	I1013 22:03:23.612228  477441 start.go:128] duration metric: took 7.665608031s to createHost
	I1013 22:03:23.612262  477441 start.go:83] releasing machines lock for "default-k8s-diff-port-505851", held for 7.665780889s
	I1013 22:03:23.612335  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:23.631161  477441 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:23.631211  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.631222  477441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:23.631306  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.652666  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.652908  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.806321  477441 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:23.813376  477441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:23.850264  477441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:23.855282  477441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:23.855348  477441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:23.882368  477441 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
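The find/-exec pair renames any pre-existing bridge or podman CNI profile with a .mk_disabled suffix, so the runtime, which loads /etc/cni/net.d in lexical order and skips unknown extensions, only sees the CNI minikube installs (kindnet is recommended later in this run). For the two files the log reports, the effect is:

	/etc/cni/net.d/10-crio-bridge.conflist.disabled -> 10-crio-bridge.conflist.disabled.mk_disabled
	/etc/cni/net.d/87-podman-bridge.conflist        -> 87-podman-bridge.conflist.mk_disabled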
	I1013 22:03:23.882397  477441 start.go:495] detecting cgroup driver to use...
	I1013 22:03:23.882432  477441 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:23.882477  477441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:23.904893  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:23.919589  477441 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:23.919649  477441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:23.937455  477441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:23.955182  477441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:24.041834  477441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:24.130369  477441 docker.go:234] disabling docker service ...
	I1013 22:03:24.130441  477441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:24.150665  477441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:24.163903  477441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:24.258822  477441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:24.345840  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:24.358981  477441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:24.373762  477441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:24.373829  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.384377  477441 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:24.384454  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.394114  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.403356  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.413541  477441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:24.422178  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.431112  477441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.444868  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.454055  477441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:24.461846  477441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:24.469362  477441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:24.563338  477441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:03:24.673439  477441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:24.673498  477441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:24.677858  477441 start.go:563] Will wait 60s for crictl version
	I1013 22:03:24.677922  477441 ssh_runner.go:195] Run: which crictl
	I1013 22:03:24.681662  477441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:24.708055  477441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:24.708153  477441 ssh_runner.go:195] Run: crio --version
	I1013 22:03:24.738483  477441 ssh_runner.go:195] Run: crio --version
	I1013 22:03:24.773240  477441 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:24.774790  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
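The Go template in the command above flattens docker network inspect into a single JSON-like object (the range over .Containers leaves a trailing comma inside ContainerIPs). For this network it would come out roughly as follows, reconstructed from the subnet the log shows rather than captured:

	{"Name": "default-k8s-diff-port-505851","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 0, "ContainerIPs": ["192.168.76.2/24",]}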
	I1013 22:03:24.792572  477441 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:24.797008  477441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:24.807712  477441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:24.807869  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:24.807933  477441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:24.842389  477441 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:24.842412  477441 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:24.842471  477441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:24.869559  477441 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:24.869585  477441 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:24.869593  477441 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:03:24.869699  477441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-505851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:03:24.869775  477441 ssh_runner.go:195] Run: crio config
	I1013 22:03:24.919344  477441 cni.go:84] Creating CNI manager for ""
	I1013 22:03:24.919375  477441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:24.919397  477441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:24.919425  477441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-505851 NodeName:default-k8s-diff-port-505851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:24.919579  477441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-505851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
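	The config above stitches four kubeadm API documents together (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by `---`. A minimal sketch for sanity-checking such a file by hand on the node, assuming the kubeadm binary path shown in this log and that this kubeadm version ships the `config validate` subcommand:

	# Hypothetical manual check; the file is staged as kubeadm.yaml.new below.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new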
	
	I1013 22:03:24.919653  477441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:24.929378  477441 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:24.929453  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:24.937831  477441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:03:24.952469  477441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:24.970115  477441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1013 22:03:24.984729  477441 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:24.988771  477441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:24.999302  477441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:25.080214  477441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:25.105985  477441 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851 for IP: 192.168.76.2
	I1013 22:03:25.106029  477441 certs.go:195] generating shared ca certs ...
	I1013 22:03:25.106052  477441 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.106216  477441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:25.106272  477441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:25.106284  477441 certs.go:257] generating profile certs ...
	I1013 22:03:25.106359  477441 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key
	I1013 22:03:25.106388  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt with IP's: []
	I1013 22:03:25.419846  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt ...
	I1013 22:03:25.419885  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt: {Name:mk728e00aa172d5cca8ad66682bc4e98e7a15542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.420119  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key ...
	I1013 22:03:25.420139  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key: {Name:mk319ddb7ff837a49040402151969c7b02d6de6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.420271  477441 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011
	I1013 22:03:25.420290  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:03:25.711316  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 ...
	I1013 22:03:25.711345  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011: {Name:mk8c81c5a3b955e4d57458a05a01e6351ea6334a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.711548  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011 ...
	I1013 22:03:25.711575  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011: {Name:mk888ff623dbb01a1319c71bbe1b19b0e7c04b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.711704  477441 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt
	I1013 22:03:25.711829  477441 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key
	I1013 22:03:25.711899  477441 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key
	I1013 22:03:25.711917  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt with IP's: []
	I1013 22:03:23.021691  476377 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:23.021806  476377 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:23.021888  476377 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:23.128199  476377 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:23.470210  476377 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:23.535933  476377 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:23.753947  476377 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:03:24.050956  476377 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:03:24.051129  476377 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-521669 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1013 22:03:24.294828  476377 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:03:24.295469  476377 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-521669 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1013 22:03:25.075632  476377 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:03:25.674527  476377 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:03:25.806145  476377 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:03:25.806239  476377 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:03:26.002440  476377 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:03:26.537264  476377 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:03:26.939341  476377 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:03:27.361431  476377 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:03:27.451189  476377 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:03:27.452773  476377 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:03:27.457811  476377 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:03:25.850356  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt ...
	I1013 22:03:25.850390  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt: {Name:mk1ed1ee8ae08b5e560918e0c409cb75a0b6ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.850569  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key ...
	I1013 22:03:25.850584  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key: {Name:mkbc909515a9fca03e924b52ead92cf32f804368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
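	Each profile cert is written alongside its key under the profile directory. One way to confirm that the IP SANs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) actually landed in the issued apiserver cert — a sketch using stock openssl, with the path taken from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'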
	I1013 22:03:25.850773  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:25.850829  477441 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:25.850841  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:25.850867  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:25.850886  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:25.850904  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:25.850946  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:25.851545  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:25.870750  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:25.888829  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:25.906847  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:25.925047  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:03:25.944035  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:25.963350  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:25.984363  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:26.003013  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:26.022970  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:26.041790  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:26.060267  477441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:26.073583  477441 ssh_runner.go:195] Run: openssl version
	I1013 22:03:26.080240  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:26.089182  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.093229  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.093289  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.130236  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:26.139667  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:26.148652  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.152721  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.152790  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.187015  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:26.196692  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:26.205899  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.209838  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.209907  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.244864  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
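	The three test/link runs above implement OpenSSL's hashed-CA-directory convention: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 so the chain builder can find it. An equivalent standalone sketch, assuming root on the node:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem)
	sudo ln -fs /etc/ssl/certs/2309292.pem "/etc/ssl/certs/${h}.0"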
	I1013 22:03:26.254442  477441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:26.258129  477441 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:26.258182  477441 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:26.258267  477441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:26.258329  477441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:26.287967  477441 cri.go:89] found id: ""
	I1013 22:03:26.288063  477441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:26.297038  477441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:26.305359  477441 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:26.305426  477441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:26.313595  477441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:26.313614  477441 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:26.313662  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 22:03:26.321467  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:26.321518  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:26.329292  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 22:03:26.337378  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:26.337432  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:26.346111  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 22:03:26.354602  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:26.354666  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:26.362665  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 22:03:26.370765  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:26.370839  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:03:26.378542  477441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:26.442708  477441 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:03:26.510211  477441 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
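	Both preflight warnings are expected under the docker driver: the kernel-config module is absent from GCP's kernel image, and minikube starts the kubelet itself (see the `systemctl start kubelet` run above) rather than enabling the unit. On a generic host the second warning would be cleared exactly as the message suggests:

	sudo systemctl enable kubelet.service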
	I1013 22:03:27.459306  476377 out.go:252]   - Booting up control plane ...
	I1013 22:03:27.459431  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:03:27.459518  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:03:27.460240  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:03:27.476611  476377 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:03:27.476757  476377 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:03:27.484756  476377 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:03:27.484893  476377 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:03:27.485012  476377 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:03:27.585540  476377 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:03:27.585678  476377 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:03:28.586437  476377 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000949022s
	I1013 22:03:28.590746  476377 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:03:28.590881  476377 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1013 22:03:28.591027  476377 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:03:28.591108  476377 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:03:29.596702  476377 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005885526s
	I1013 22:03:30.847000  476377 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.256279312s
	I1013 22:03:32.092893  476377 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.50208585s
	I1013 22:03:32.106657  476377 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:03:32.120734  476377 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:03:32.133072  476377 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:03:32.133366  476377 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-521669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:03:32.143828  476377 kubeadm.go:318] [bootstrap-token] Using token: iu6qpi.vhxdg8i706f1jc7o
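	The bootstrap token issued here is what a joining node would present to the control plane; per the InitConfiguration above it expires after 24h. If it lapsed before a worker joined, a fresh one could be minted with a standard kubeadm command (not something this test runs):

	sudo kubeadm token create --print-join-command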
	
	
	==> CRI-O <==
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.678294559Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.681803949Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.681830524Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.913637014Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=944aaed8-6d5c-44b5-8b9a-608b814dec21 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.916515915Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fd79e0e1-c693-4b6a-87ee-9473bb630f90 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.91938671Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=2ff58134-6fe7-470a-9f9d-325dcaa5563d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.921565134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.927776443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.928336814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.961225382Z" level=info msg="Created container f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=2ff58134-6fe7-470a-9f9d-325dcaa5563d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.96192305Z" level=info msg="Starting container: f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef" id=45c697ef-dd39-4661-8308-4c69c2242ed5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.964042994Z" level=info msg="Started container" PID=1755 containerID=f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper id=45c697ef-dd39-4661-8308-4c69c2242ed5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38c676dc0edb3823e7b9790dfc6b4a2e25f729f5df8cd26bd2cb8b5e68c936f3
	Oct 13 22:03:07 no-preload-080337 crio[561]: time="2025-10-13T22:03:07.010640903Z" level=info msg="Removing container: 92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a" id=1527f66b-3018-4b55-86e3-aeb65236effa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:03:07 no-preload-080337 crio[561]: time="2025-10-13T22:03:07.022261741Z" level=info msg="Removed container 92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=1527f66b-3018-4b55-86e3-aeb65236effa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.030507805Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1626cc39-59ab-4e6b-82a6-0560a420ae17 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.031571331Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=af0cb6c0-28df-48b8-9149-2e14272b1319 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.032665738Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4588445-2713-467e-a640-b17e34aec21e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.032946808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.040661874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.040900002Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/af77e7e8553fbbca3404061d81d481a625950d22700101f2d2d5524927a4cf66/merged/etc/passwd: no such file or directory"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.04093104Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/af77e7e8553fbbca3404061d81d481a625950d22700101f2d2d5524927a4cf66/merged/etc/group: no such file or directory"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.04129473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.0685341Z" level=info msg="Created container a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4: kube-system/storage-provisioner/storage-provisioner" id=b4588445-2713-467e-a640-b17e34aec21e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.069234087Z" level=info msg="Starting container: a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4" id=946a128b-acb7-458c-87dd-62b0b7ba241a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.071379583Z" level=info msg="Started container" PID=1769 containerID=a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4 description=kube-system/storage-provisioner/storage-provisioner id=946a128b-acb7-458c-87dd-62b0b7ba241a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b157b262815590a2c71c6209da58bbf7a774a03d3441428685132ea518fb87e1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a2800e4594ddb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   b157b26281559       storage-provisioner                          kube-system
	f7a7540b72189       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   38c676dc0edb3       dashboard-metrics-scraper-6ffb444bf9-q2g2s   kubernetes-dashboard
	ff734f532ee90       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   c84c828594c04       kubernetes-dashboard-855c9754f9-mkvmc        kubernetes-dashboard
	0a3d791b517ff       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   30f2676492f91       coredns-66bc5c9577-n6t7s                     kube-system
	c11d7ea10ff07       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   26931721ce632       kindnet-74766                                kube-system
	ffff1cd868444       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   c65145bbe6d8e       busybox                                      default
	ca17462b8cc0e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   5ece505d36e59       kube-proxy-2scrx                             kube-system
	171aa5a37278a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   b157b26281559       storage-provisioner                          kube-system
	148f0bcacf55a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   3bd670e335491       etcd-no-preload-080337                       kube-system
	db978d7166395       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   76a4f9e5e9eb8       kube-apiserver-no-preload-080337             kube-system
	3f85644ea5a0b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   7ea6dbd197034       kube-controller-manager-no-preload-080337    kube-system
	09313475387f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   abf4dddba602b       kube-scheduler-no-preload-080337             kube-system
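	Note that dashboard-metrics-scraper is Exited on attempt 2 while everything else is Running: the kubelet is restarting it under backoff. Its termination output can be pulled straight from CRI-O with crictl, using the container ID prefix from the table — a diagnostic sketch, not part of the test:

	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl logs f7a7540b72189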
	
	
	==> coredns [0a3d791b517ffdd9da09560885e05b173435fc2617cdb09b7a07530db6434db5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44289 - 194 "HINFO IN 8769929789709925291.6681308039238444373. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065629172s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
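	The dial tcp 10.96.0.1:443 i/o timeouts typically mean CoreDNS could not reach the kubernetes Service VIP while kube-proxy's rules were still syncing after the restart, and they clear once the VIP becomes routable. A quick way to check that the VIP maps to the real apiserver endpoint, shown as a sketch with standard tooling:

	kubectl get endpoints kubernetes -o wide
	# Assumption: kube-proxy is running in iptables mode on this node.
	sudo iptables-save | grep 10.96.0.1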
	
	
	==> describe nodes <==
	Name:               no-preload-080337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-080337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-080337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-080337
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:02:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-080337
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                b626e944-ef41-4bbd-9e16-cce1552f60c7
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-n6t7s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-080337                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-74766                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-080337              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-080337     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-2scrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-080337              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q2g2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mkvmc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node no-preload-080337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node no-preload-080337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node no-preload-080337 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node no-preload-080337 event: Registered Node no-preload-080337 in Controller
	  Normal  NodeReady                92s                kubelet          Node no-preload-080337 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node no-preload-080337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node no-preload-080337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node no-preload-080337 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node no-preload-080337 event: Registered Node no-preload-080337 in Controller
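	The duplicated Starting/NodeHas* events (111s ago and again 54-55s ago) bracket the kubelet restart during this start/stop test; the node re-registered with the node-controller both times. The same view can be reproduced against a live cluster with:

	kubectl describe node no-preload-080337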
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
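	The repeated "martian source 10.244.0.20 from 127.0.0.1" lines are hairpin pod traffic (loopback source arriving on eth0) flagged by the kernel's martian logging, and their timestamps (21:21-21:22) place them in an earlier test window, not this cluster. Whether martian logging is enabled can be checked with:

	sysctl net.ipv4.conf.all.log_martians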
	
	
	==> etcd [148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d] <==
	{"level":"warn","ts":"2025-10-13T22:02:40.691118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.698026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.705324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.712691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.719815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.726461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.732521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.746261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.759320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.766123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.772426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.778615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.786138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.793377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.800317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.806468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.812647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.818845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.830186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.834967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.845084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.851271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52898","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:03:14.275624Z","caller":"traceutil/trace.go:172","msg":"trace[1620239583] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"234.058297ms","start":"2025-10-13T22:03:14.041544Z","end":"2025-10-13T22:03:14.275602Z","steps":["trace[1620239583] 'process raft request'  (duration: 233.892225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:03:14.562435Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.883308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-n6t7s\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-10-13T22:03:14.562516Z","caller":"traceutil/trace.go:172","msg":"trace[1526227819] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-n6t7s; range_end:; response_count:1; response_revision:622; }","duration":"156.008322ms","start":"2025-10-13T22:03:14.406491Z","end":"2025-10-13T22:03:14.562500Z","steps":["trace[1526227819] 'range keys from in-memory index tree'  (duration: 155.711594ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:03:33 up  1:46,  0 user,  load average: 3.73, 3.37, 5.86
	Linux no-preload-080337 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c11d7ea10ff07c5ab8ae8feca92e0b0aa357520977cf80360fa01049e5b32b5f] <==
	I1013 22:02:42.456548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:02:42.550101       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:02:42.550273       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:02:42.550290       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:02:42.550315       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:02:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:02:42.659082       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:02:42.750157       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:02:42.750274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:02:42.750693       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:02:42.955346       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:02:42.955390       1 metrics.go:72] Registering metrics
	I1013 22:02:42.956361       1 controller.go:711] "Syncing nftables rules"
	I1013 22:02:52.659115       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:02:52.659172       1 main.go:301] handling current node
	I1013 22:03:02.667057       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:02.667097       1 main.go:301] handling current node
	I1013 22:03:12.659061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:12.659093       1 main.go:301] handling current node
	I1013 22:03:22.664070       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:22.664115       1 main.go:301] handling current node
	I1013 22:03:32.668081       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:32.668115       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f] <==
	I1013 22:02:41.382955       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:02:41.382963       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:02:41.382971       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:02:41.381173       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:02:41.380964       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:02:41.380983       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:02:41.381128       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:02:41.383447       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:02:41.388042       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:02:41.410905       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:02:41.411186       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:02:41.422902       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:02:41.422935       1 policy_source.go:240] refreshing policies
	I1013 22:02:41.463814       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:02:41.681166       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:02:41.708476       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:02:41.726785       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:02:41.737523       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:02:41.743479       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:02:41.775635       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.241.189"}
	I1013 22:02:41.785157       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.202.77"}
	I1013 22:02:42.286131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:02:45.024493       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:02:45.174302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:02:45.272828       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5] <==
	I1013 22:02:44.702052       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:02:44.704356       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:02:44.706892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:02:44.708215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:02:44.711363       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:02:44.720373       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:02:44.720421       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:02:44.720453       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:02:44.720473       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:02:44.720498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:02:44.720543       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:02:44.720759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:02:44.720899       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:02:44.720916       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:02:44.720921       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:02:44.721070       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:02:44.721103       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:02:44.721692       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:02:44.721724       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:02:44.722922       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:02:44.722950       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:02:44.724106       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:02:44.726358       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:02:44.741458       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:02:44.744776       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca17462b8cc0e8271f720f326aced92a21cf66c7a613241186fd9386088f8ac4] <==
	I1013 22:02:42.313419       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:02:42.369257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:02:42.469690       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:02:42.469735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:02:42.469844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:02:42.491377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:02:42.491434       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:02:42.496786       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:02:42.497272       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:02:42.497304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:42.498569       1 config.go:200] "Starting service config controller"
	I1013 22:02:42.498597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:02:42.498683       1 config.go:309] "Starting node config controller"
	I1013 22:02:42.498690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:02:42.498835       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:02:42.498850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:02:42.499264       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:02:42.499460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:02:42.599169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:02:42.599206       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:02:42.599206       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:02:42.599741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424] <==
	I1013 22:02:39.920828       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:02:41.390443       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:02:41.390476       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:41.396805       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:02:41.396839       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:02:41.396922       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.396923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.396944       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.396953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.397506       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:02:41.397594       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:02:41.497459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.497464       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.497473       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 13 22:02:45 no-preload-080337 kubelet[710]: I1013 22:02:45.344056     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhbh\" (UniqueName: \"kubernetes.io/projected/8b62cb5c-c068-444e-a216-87c6c73d107b-kube-api-access-vwhbh\") pod \"kubernetes-dashboard-855c9754f9-mkvmc\" (UID: \"8b62cb5c-c068-444e-a216-87c6c73d107b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc"
	Oct 13 22:02:45 no-preload-080337 kubelet[710]: I1013 22:02:45.344153     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b62cb5c-c068-444e-a216-87c6c73d107b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mkvmc\" (UID: \"8b62cb5c-c068-444e-a216-87c6c73d107b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc"
	Oct 13 22:02:46 no-preload-080337 kubelet[710]: I1013 22:02:46.501048     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:02:47 no-preload-080337 kubelet[710]: I1013 22:02:47.955749     710 scope.go:117] "RemoveContainer" containerID="3eddbe430db5fa262b81161bb8d5b10238dd1e0dacfdab840055d5c0a3f08255"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: I1013 22:02:48.960388     710 scope.go:117] "RemoveContainer" containerID="3eddbe430db5fa262b81161bb8d5b10238dd1e0dacfdab840055d5c0a3f08255"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: I1013 22:02:48.960592     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: E1013 22:02:48.960791     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:02:49 no-preload-080337 kubelet[710]: I1013 22:02:49.964530     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:49 no-preload-080337 kubelet[710]: E1013 22:02:49.964724     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:02:51 no-preload-080337 kubelet[710]: I1013 22:02:51.980602     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc" podStartSLOduration=1.32029568 podStartE2EDuration="6.980579985s" podCreationTimestamp="2025-10-13 22:02:45 +0000 UTC" firstStartedPulling="2025-10-13 22:02:45.57297582 +0000 UTC m=+6.747195581" lastFinishedPulling="2025-10-13 22:02:51.233260136 +0000 UTC m=+12.407479886" observedRunningTime="2025-10-13 22:02:51.98035587 +0000 UTC m=+13.154575639" watchObservedRunningTime="2025-10-13 22:02:51.980579985 +0000 UTC m=+13.154799754"
	Oct 13 22:02:53 no-preload-080337 kubelet[710]: I1013 22:02:53.101101     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:53 no-preload-080337 kubelet[710]: E1013 22:02:53.101284     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:06 no-preload-080337 kubelet[710]: I1013 22:03:06.913158     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: I1013 22:03:07.009363     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: I1013 22:03:07.009637     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: E1013 22:03:07.010062     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: I1013 22:03:13.030144     710 scope.go:117] "RemoveContainer" containerID="171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: I1013 22:03:13.102201     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: E1013 22:03:13.102419     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:23 no-preload-080337 kubelet[710]: I1013 22:03:23.912104     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:23 no-preload-080337 kubelet[710]: E1013 22:03:23.912325     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:30 no-preload-080337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:03:30 no-preload-080337 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:03:30 no-preload-080337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:03:30 no-preload-080337 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [ff734f532ee90c978ae4ce5cfb25e9648dbfe2eedcb5f833476bc6ebc32b57e8] <==
	2025/10/13 22:02:51 Using namespace: kubernetes-dashboard
	2025/10/13 22:02:51 Using in-cluster config to connect to apiserver
	2025/10/13 22:02:51 Using secret token for csrf signing
	2025/10/13 22:02:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:02:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:02:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:02:51 Generating JWE encryption key
	2025/10/13 22:02:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:02:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:02:51 Initializing JWE encryption key from synchronized object
	2025/10/13 22:02:51 Creating in-cluster Sidecar client
	2025/10/13 22:02:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:51 Serving insecurely on HTTP port: 9090
	2025/10/13 22:03:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:51 Starting overwatch
	
	
	==> storage-provisioner [171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531] <==
	I1013 22:02:42.276038       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:03:12.278428       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4] <==
	I1013 22:03:13.085165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:03:13.093750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:03:13.093815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:03:13.096065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:16.551082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:20.811932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:24.410527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:27.465424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.489285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.495424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:30.495588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:03:30.496231       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa!
	I1013 22:03:30.496705       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7f52034-0e22-43b7-ac83-32c79d19cae9", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa became leader
	W1013 22:03:30.499097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.504638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:30.596550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa!
	W1013 22:03:32.508409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:32.518623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
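The component logs above all share the klog prefix layout `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` (the same layout the Last Start section further down declares). A minimal Go sketch of a parser for that prefix follows; the regexp is an assumption derived from the layout string, not minikube's own log machinery:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the prefix "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
// This regexp is an assumption based on the documented layout, not minikube code.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	// Example line taken verbatim from the kindnet block above.
	line := `I1013 22:02:42.550290       1 main.go:178] kindnetd IP family: "ipv4"`
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}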
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080337 -n no-preload-080337: exit status 2 (382.467114ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
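The `--format={{.APIServer}}` and `--format={{.Host}}` flags used throughout these post-mortems are Go text/template expressions evaluated against minikube's status value, which is why the command prints a single word such as `Running`. A minimal sketch of that mechanism; the `Status` struct here is a stand-in for illustration, not minikube's actual type:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Status is a stand-in for the value minikube renders with --format.
type Status struct {
	Host      string
	APIServer string
	Kubelet   string
}

func main() {
	st := Status{Host: "Running", APIServer: "Running", Kubelet: "Running"}
	// The flag value becomes the template source.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Running
		panic(err)
	}
	fmt.Println()
}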
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-080337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
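A minimal sketch of how an env snapshot like the line above can be taken; the `show` helper and the `<empty>` placeholder handling are assumptions for illustration, not helpers_test.go's code:

package main

import (
	"fmt"
	"os"
)

// show reports an env var, substituting "<empty>" when it is unset or blank.
func show(key string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return "<empty>"
}

func main() {
	fmt.Printf("PROXY env: HTTP_PROXY=%q HTTPS_PROXY=%q NO_PROXY=%q\n",
		show("HTTP_PROXY"), show("HTTPS_PROXY"), show("NO_PROXY"))
}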
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-080337
helpers_test.go:243: (dbg) docker inspect no-preload-080337:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	        "Created": "2025-10-13T22:01:13.425171095Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468753,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:02:31.767892266Z",
	            "FinishedAt": "2025-10-13T22:02:30.778671549Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/hosts",
	        "LogPath": "/var/lib/docker/containers/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8/582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8-json.log",
	        "Name": "/no-preload-080337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-080337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-080337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "582c4b9df6d8b6770ac76b0dd560241ab3fa5f134c4e1b0fc0ec2cd0d08c34c8",
	                "LowerDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c471c6160b15e3a21754875e4401849c13d42534f05e08f0d4d88218c5c26bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-080337",
	                "Source": "/var/lib/docker/volumes/no-preload-080337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-080337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-080337",
	                "name.minikube.sigs.k8s.io": "no-preload-080337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5640ab50be6b45d677fea13523620542458dfefc1b549685c4742db3ac5c731",
	            "SandboxKey": "/var/run/docker/netns/d5640ab50be6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-080337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:d2:b1:d8:f2:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "023fbfd0e79f229835d49fb4d5f52967eb961e42ade48e5f1189467342508af0",
	                    "EndpointID": "7159af935bc0a7b2fa3d899c89e433e68f46757dec0ffcaa15533f01e3d7b4b3",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-080337",
	                        "582c4b9df6d8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
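`docker inspect` emits a JSON array, so the dump above can be decoded into a small Go struct. A sketch under that assumption; the struct covers only the fields shown in the dump and is not the full Docker API type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry covers only the fields used below.
type inspectEntry struct {
	Name  string
	State struct {
		Status  string
		Running bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-080337").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect emits a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		// e.g. "/no-preload-080337 running [{127.0.0.1 33071}]"
		fmt.Println(e.Name, e.State.Status, e.NetworkSettings.Ports["8443/tcp"])
	}
}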
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337: exit status 2 (362.042823ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-080337 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-080337 logs -n 25: (1.385056633s)
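The `(dbg) Done: ...: (1.385056633s)` line records how long the shelled-out command took. A minimal Go sketch of running a command and reporting its duration in that style; the wrapper is hypothetical, not the harness's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-080337", "logs", "-n", "25")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	elapsed := time.Since(start)
	if err != nil {
		fmt.Printf("(dbg) Non-zero exit: %v: %v (%s)\n", cmd.Args, err, elapsed)
		return
	}
	fmt.Printf("(dbg) Done: %v: (%s)\n", cmd.Args, elapsed)
	_ = out // the post-mortem helpers would print this log output
}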
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-200102 sudo crio config                                                                                                                                                                                                             │ cilium-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │                     │
	│ delete  │ -p cilium-200102                                                                                                                                                                                                                              │ cilium-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:00 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:00 UTC │ 13 Oct 25 22:01 UTC │
	│ delete  │ -p force-systemd-env-010902                                                                                                                                                                                                                   │ force-systemd-env-010902     │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:01 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-534822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │                     │
	│ stop    │ -p old-k8s-version-534822 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:01 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:03:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:03:15.737963  477441 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:15.738301  477441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:15.738312  477441 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:15.738316  477441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:15.738557  477441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:15.739095  477441 out.go:368] Setting JSON to false
	I1013 22:03:15.740395  477441 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6344,"bootTime":1760386652,"procs":473,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:03:15.740496  477441 start.go:141] virtualization: kvm guest
	I1013 22:03:15.742606  477441 out.go:179] * [default-k8s-diff-port-505851] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:03:15.744137  477441 notify.go:220] Checking for updates...
	I1013 22:03:15.744144  477441 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:03:15.745594  477441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:03:15.747079  477441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:03:15.748294  477441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:03:15.749547  477441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:03:15.750787  477441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:03:15.752693  477441 config.go:182] Loaded profile config "cert-expiration-894101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.752798  477441 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.752917  477441 config.go:182] Loaded profile config "no-preload-080337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:15.753060  477441 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:03:15.777943  477441 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:03:15.778093  477441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:15.841292  477441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:03:15.830505283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:15.841434  477441 docker.go:318] overlay module found
	I1013 22:03:15.844436  477441 out.go:179] * Using the docker driver based on user configuration
	I1013 22:03:15.845889  477441 start.go:305] selected driver: docker
	I1013 22:03:15.845911  477441 start.go:925] validating driver "docker" against <nil>
	I1013 22:03:15.845927  477441 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:03:15.846656  477441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:15.914386  477441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:90 SystemTime:2025-10-13 22:03:15.903775663 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:15.914648  477441 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:03:15.914974  477441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:15.917001  477441 out.go:179] * Using Docker driver with root privileges
	I1013 22:03:15.918170  477441 cni.go:84] Creating CNI manager for ""
	I1013 22:03:15.918255  477441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:15.918272  477441 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:03:15.918359  477441 start.go:349] cluster config:
	{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:15.920053  477441 out.go:179] * Starting "default-k8s-diff-port-505851" primary control-plane node in "default-k8s-diff-port-505851" cluster
	I1013 22:03:15.921500  477441 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:03:15.922806  477441 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:03:15.923852  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:15.923897  477441 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:03:15.923910  477441 cache.go:58] Caching tarball of preloaded images
	I1013 22:03:15.923969  477441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:03:15.924107  477441 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:03:15.924126  477441 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:03:15.924282  477441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:03:15.924315  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json: {Name:mkb4d5a74d02f3a2cdcdf9b4879867af4ffa44af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:15.946274  477441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:03:15.946302  477441 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:03:15.946320  477441 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:03:15.946355  477441 start.go:360] acquireMachinesLock for default-k8s-diff-port-505851: {Name:mkaf957bc5ced7f5c930a2e33ff0ee7c156af144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:03:15.946463  477441 start.go:364] duration metric: took 87.124µs to acquireMachinesLock for "default-k8s-diff-port-505851"
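Both the config.json write above and this machines lock log the same {Delay:500ms Timeout:...} parameters: a contender polls for the lock every 500ms until it succeeds or the timeout expires. A minimal Go sketch of that poll-with-deadline shape, using a plain O_EXCL lock file (the path and API here are illustrative, not minikube's actual lock package):

	// lockfile.go: sketch of a poll-based file lock with Delay/Timeout
	// semantics, mirroring the {Delay:500ms Timeout:10m0s} values above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire creates path with O_EXCL as a crude exclusive lock, retrying
	// every delay until timeout. The caller removes the file to release it.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f.Close() // lock held; remove path to release
			}
			if !errors.Is(err, os.ErrExist) {
				return err // real I/O error, not contention
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s after %s", path, timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		if err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}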
	I1013 22:03:15.946496  477441 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:03:15.946599  477441 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:03:11.432189  476377 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:11.432462  476377 start.go:159] libmachine.API.Create for "embed-certs-521669" (driver="docker")
	I1013 22:03:11.432500  476377 client.go:168] LocalClient.Create starting
	I1013 22:03:11.432577  476377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:11.432620  476377 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:11.432646  476377 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:11.432754  476377 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:11.432787  476377 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:11.432801  476377 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:11.433249  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:11.451243  476377 cli_runner.go:211] docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:11.451324  476377 network_create.go:284] running [docker network inspect embed-certs-521669] to gather additional debugging logs...
	I1013 22:03:11.451345  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669
	W1013 22:03:11.469447  476377 cli_runner.go:211] docker network inspect embed-certs-521669 returned with exit code 1
	I1013 22:03:11.469504  476377 network_create.go:287] error running [docker network inspect embed-certs-521669]: docker network inspect embed-certs-521669: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-521669 not found
	I1013 22:03:11.469533  476377 network_create.go:289] output of [docker network inspect embed-certs-521669]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-521669 not found
	
	** /stderr **
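The two inspect calls above show the debugging pattern cli_runner follows: first a templated `docker network inspect` for structured fields, then, on a non-zero exit, a bare re-run purely to capture stdout/stderr for the log. A rough os/exec sketch of that fallback (the Go template is abbreviated from the one in the log; the surrounding function is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectNetwork tries a templated inspect first; if docker exits
	// non-zero (e.g. "network ... not found"), it re-runs the bare inspect
	// only to gather debug output, as the log above does.
	func inspectNetwork(name string) (string, error) {
		tmpl := `{"Name": "{{.Name}}","Driver": "{{.Driver}}"}`
		out, err := exec.Command("docker", "network", "inspect", name, "--format", tmpl).Output()
		if err == nil {
			return string(out), nil
		}
		// Fallback: capture combined output for the debug log, then report the error.
		dbg, _ := exec.Command("docker", "network", "inspect", name).CombinedOutput()
		return "", fmt.Errorf("inspect %s failed: %v\ndebug output:\n%s", name, err, dbg)
	}

	func main() {
		if _, err := inspectNetwork("embed-certs-521669"); err != nil {
			fmt.Println(err)
		}
	}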
	I1013 22:03:11.469718  476377 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:11.487501  476377 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:11.488158  476377 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:11.488770  476377 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:11.489428  476377 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c946d4d0529a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:85:25:23:b8:8e} reservation:<nil>}
	I1013 22:03:11.489866  476377 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-41a0a7263ae4 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:f3:d9:f6:e7:45} reservation:<nil>}
	I1013 22:03:11.490377  476377 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-023fbfd0e79f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9a:52:07:fb:e7:b6} reservation:<nil>}
	I1013 22:03:11.491218  476377 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fc3720}
	I1013 22:03:11.491245  476377 network_create.go:124] attempt to create docker network embed-certs-521669 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1013 22:03:11.491297  476377 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-521669 embed-certs-521669
	I1013 22:03:11.554361  476377 network_create.go:108] docker network embed-certs-521669 192.168.103.0/24 created
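The subnet selection above starts at 192.168.49.0/24 and advances the third octet in steps of 9 (49, 58, 67, 76, 85, 94, 103) until it reaches a /24 no existing bridge occupies. A simplified sketch of that walk, with the taken set hard-coded from the log (the real code derives it from the host's bridge interfaces):

	package main

	import "fmt"

	// freeSubnet walks candidate 192.168.x.0/24 blocks in steps of 9, as the
	// log above shows (49 -> 58 -> ... -> 103), returning the first block
	// not present in taken.
	func freeSubnet(taken map[int]bool) (string, bool) {
		for octet := 49; octet <= 247; octet += 9 {
			if !taken[octet] {
				return fmt.Sprintf("192.168.%d.0/24", octet), true
			}
		}
		return "", false
	}

	func main() {
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
		subnet, ok := freeSubnet(taken)
		fmt.Println(subnet, ok) // 192.168.103.0/24 true
	}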
	I1013 22:03:11.554390  476377 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-521669" container
	I1013 22:03:11.554461  476377 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:11.574103  476377 cli_runner.go:164] Run: docker volume create embed-certs-521669 --label name.minikube.sigs.k8s.io=embed-certs-521669 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:11.593698  476377 oci.go:103] Successfully created a docker volume embed-certs-521669
	I1013 22:03:11.593776  476377 cli_runner.go:164] Run: docker run --rm --name embed-certs-521669-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-521669 --entrypoint /usr/bin/test -v embed-certs-521669:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:12.027133  476377 oci.go:107] Successfully prepared a docker volume embed-certs-521669
	I1013 22:03:12.027174  476377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:12.027196  476377 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:12.027254  476377 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-521669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:03:15.474512  476377 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-521669:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.447201807s)
	I1013 22:03:15.474548  476377 kic.go:203] duration metric: took 3.447347241s to extract preloaded images to volume ...
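The three docker invocations above implement one pattern: create a named volume, prime it with a throwaway sidecar that just verifies the mount, then stream the lz4-compressed preload tarball into it so the node container starts with its images already on disk. A condensed os/exec sketch of the same sequence (image name and flags copied from the log; the tarball path is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run surfaces the failing command in the error for easier debugging.
	func run(args ...string) error {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		vol := "embed-certs-521669"
		img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724"
		tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" // illustrative
		steps := [][]string{
			{"docker", "volume", "create", vol},
			// Sidecar that stats /var/lib, proving the volume mounts cleanly.
			{"docker", "run", "--rm", "--entrypoint", "/usr/bin/test", "-v", vol + ":/var", img, "-d", "/var/lib"},
			// Extract the preload straight into the volume.
			{"docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
				"-v", tarball + ":/preloaded.tar:ro", "-v", vol + ":/extractDir",
				img, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println(err)
				return
			}
		}
	}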
	W1013 22:03:15.474662  476377 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:15.474705  476377 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:15.474753  476377 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:15.537080  476377 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-521669 --name embed-certs-521669 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-521669 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-521669 --network embed-certs-521669 --ip 192.168.103.2 --volume embed-certs-521669:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:03:15.828585  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Running}}
	I1013 22:03:15.849234  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	I1013 22:03:15.870675  476377 cli_runner.go:164] Run: docker exec embed-certs-521669 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:15.924712  476377 oci.go:144] the created container "embed-certs-521669" has a running status.
	I1013 22:03:15.924742  476377 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa...
	I1013 22:03:16.078015  476377 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:16.113130  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	I1013 22:03:16.134647  476377 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:16.134676  476377 kic_runner.go:114] Args: [docker exec --privileged embed-certs-521669 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:16.201077  476377 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
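Creating the kic SSH key above amounts to generating an RSA keypair, writing the private half under .minikube/machines/<name>/id_rsa, and installing the public half as /home/docker/.ssh/authorized_keys inside the container (the 381 bytes in the log is that one authorized_keys line). A minimal sketch of the key-generation half using golang.org/x/crypto/ssh; output paths and key size are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// 2048-bit RSA keypair; the size here is illustrative.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encode the private key for id_rsa.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// authorized_keys format ("ssh-rsa AAAA...\n") for the container side.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		authorized := ssh.MarshalAuthorizedKey(pub)
		fmt.Printf("authorized_keys entry is %d bytes\n", len(authorized))
	}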
	W1013 22:03:13.910488  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	W1013 22:03:15.910917  468497 pod_ready.go:104] pod "coredns-66bc5c9577-n6t7s" is not "Ready", error: <nil>
	I1013 22:03:16.910657  468497 pod_ready.go:94] pod "coredns-66bc5c9577-n6t7s" is "Ready"
	I1013 22:03:16.910686  468497 pod_ready.go:86] duration metric: took 34.006165322s for pod "coredns-66bc5c9577-n6t7s" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.913440  468497 pod_ready.go:83] waiting for pod "etcd-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.917936  468497 pod_ready.go:94] pod "etcd-no-preload-080337" is "Ready"
	I1013 22:03:16.917966  468497 pod_ready.go:86] duration metric: took 4.499065ms for pod "etcd-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.920321  468497 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.924156  468497 pod_ready.go:94] pod "kube-apiserver-no-preload-080337" is "Ready"
	I1013 22:03:16.924176  468497 pod_ready.go:86] duration metric: took 3.835719ms for pod "kube-apiserver-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:16.926302  468497 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.109757  468497 pod_ready.go:94] pod "kube-controller-manager-no-preload-080337" is "Ready"
	I1013 22:03:17.109793  468497 pod_ready.go:86] duration metric: took 183.46409ms for pod "kube-controller-manager-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.309044  468497 pod_ready.go:83] waiting for pod "kube-proxy-2scrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.708022  468497 pod_ready.go:94] pod "kube-proxy-2scrx" is "Ready"
	I1013 22:03:17.708055  468497 pod_ready.go:86] duration metric: took 398.979909ms for pod "kube-proxy-2scrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:17.908508  468497 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:18.308756  468497 pod_ready.go:94] pod "kube-scheduler-no-preload-080337" is "Ready"
	I1013 22:03:18.308787  468497 pod_ready.go:86] duration metric: took 400.253383ms for pod "kube-scheduler-no-preload-080337" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:18.308803  468497 pod_ready.go:40] duration metric: took 35.407537273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:18.364736  468497 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:03:18.368024  468497 out.go:179] * Done! kubectl is now configured to use "no-preload-080337" cluster and "default" namespace by default
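The pod_ready waits above poll each control-plane component's label selector until every matching pod reports the Ready condition (or is gone). The same check reduced to its core with client-go; this is a sketch of the idea under those assumptions, not minikube's pod_ready.go, and clientset construction from a kubeconfig is omitted:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitReady polls pods matching selector in kube-system until every one
	// has condition PodReady == True, or the timeout elapses.
	func waitReady(cs kubernetes.Interface, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !podReady(&p) {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {} // wiring a real clientset is omitted in this sketch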
	I1013 22:03:15.952136  477441 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:15.952407  477441 start.go:159] libmachine.API.Create for "default-k8s-diff-port-505851" (driver="docker")
	I1013 22:03:15.952448  477441 client.go:168] LocalClient.Create starting
	I1013 22:03:15.952537  477441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:15.952579  477441 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:15.952609  477441 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:15.952708  477441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:15.952739  477441 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:15.952753  477441 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:15.953187  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:15.972246  477441 cli_runner.go:211] docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:15.972332  477441 network_create.go:284] running [docker network inspect default-k8s-diff-port-505851] to gather additional debugging logs...
	I1013 22:03:15.972356  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851
	W1013 22:03:15.996117  477441 cli_runner.go:211] docker network inspect default-k8s-diff-port-505851 returned with exit code 1
	I1013 22:03:15.996179  477441 network_create.go:287] error running [docker network inspect default-k8s-diff-port-505851]: docker network inspect default-k8s-diff-port-505851: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-505851 not found
	I1013 22:03:15.996198  477441 network_create.go:289] output of [docker network inspect default-k8s-diff-port-505851]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-505851 not found
	
	** /stderr **
	I1013 22:03:15.996356  477441 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:16.016963  477441 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:16.018030  477441 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:16.019112  477441 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:16.020274  477441 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f5ac90}
	I1013 22:03:16.020302  477441 network_create.go:124] attempt to create docker network default-k8s-diff-port-505851 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:03:16.020372  477441 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 default-k8s-diff-port-505851
	I1013 22:03:16.089396  477441 network_create.go:108] docker network default-k8s-diff-port-505851 192.168.76.0/24 created
	I1013 22:03:16.089432  477441 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-505851" container
	I1013 22:03:16.089503  477441 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:16.116271  477441 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-505851 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:16.139909  477441 oci.go:103] Successfully created a docker volume default-k8s-diff-port-505851
	I1013 22:03:16.140041  477441 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-505851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --entrypoint /usr/bin/test -v default-k8s-diff-port-505851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:16.606803  477441 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-505851
	I1013 22:03:16.606851  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:16.606878  477441 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:16.606961  477441 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-505851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:03:16.221362  476377 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:16.221469  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.245621  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.245941  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.245962  476377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:16.394047  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-521669
	
	I1013 22:03:16.394082  476377 ubuntu.go:182] provisioning hostname "embed-certs-521669"
	I1013 22:03:16.394163  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.416457  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.416731  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.416790  476377 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-521669 && echo "embed-certs-521669" | sudo tee /etc/hostname
	I1013 22:03:16.587752  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-521669
	
	I1013 22:03:16.587863  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:16.610262  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:16.610551  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:16.610573  476377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-521669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-521669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-521669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:16.755473  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:16.755504  476377 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:16.755553  476377 ubuntu.go:190] setting up certificates
	I1013 22:03:16.755569  476377 provision.go:84] configureAuth start
	I1013 22:03:16.755641  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:16.775591  476377 provision.go:143] copyHostCerts
	I1013 22:03:16.775664  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:16.775673  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:16.775737  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:16.775854  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:16.775868  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:16.775898  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:16.775988  476377 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:16.776013  476377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:16.776048  476377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:16.776176  476377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.embed-certs-521669 san=[127.0.0.1 192.168.103.2 embed-certs-521669 localhost minikube]
	I1013 22:03:17.290608  476377 provision.go:177] copyRemoteCerts
	I1013 22:03:17.290671  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:17.290709  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.311404  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:17.415094  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:17.442565  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:03:17.460884  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:03:17.480889  476377 provision.go:87] duration metric: took 725.302266ms to configureAuth
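configureAuth above generates a server certificate whose SANs cover every name the machine answers to (127.0.0.1, the container IP, the hostname, localhost, minikube), signed by the local minikube CA. A compact crypto/x509 sketch of issuing such a SAN certificate, self-signed here for brevity where the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-521669"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the log: san=[127.0.0.1 192.168.103.2 embed-certs-521669 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"embed-certs-521669", "localhost", "minikube"},
		}
		// Self-signed for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}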
	I1013 22:03:17.480917  476377 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:17.481122  476377 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:17.481243  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.500948  476377 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:17.501305  476377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1013 22:03:17.501336  476377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:17.783274  476377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:17.783307  476377 machine.go:96] duration metric: took 1.561917857s to provisionDockerMachine
	I1013 22:03:17.783317  476377 client.go:171] duration metric: took 6.350807262s to LocalClient.Create
	I1013 22:03:17.783331  476377 start.go:167] duration metric: took 6.350874531s to libmachine.API.Create "embed-certs-521669"
	I1013 22:03:17.783340  476377 start.go:293] postStartSetup for "embed-certs-521669" (driver="docker")
	I1013 22:03:17.783352  476377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:17.783422  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:17.783470  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.803863  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:17.907015  476377 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:17.911487  476377 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:17.911525  476377 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:17.911539  476377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:17.911612  476377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:17.911736  476377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:17.911878  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:17.920464  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:17.944107  476377 start.go:296] duration metric: took 160.751032ms for postStartSetup
	I1013 22:03:17.944526  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:17.962986  476377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/config.json ...
	I1013 22:03:17.963368  476377 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:17.963433  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:17.982848  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.080160  476377 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:18.084919  476377 start.go:128] duration metric: took 6.654639128s to createHost
	I1013 22:03:18.084950  476377 start.go:83] releasing machines lock for "embed-certs-521669", held for 6.65478014s
	I1013 22:03:18.085047  476377 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-521669
	I1013 22:03:18.103381  476377 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:18.103445  476377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:18.103454  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:18.103538  476377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:03:18.124826  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.125175  476377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:03:18.282543  476377 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:18.289969  476377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:18.331007  476377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:18.336354  476377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:18.336433  476377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:18.370656  476377 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:18.370685  476377 start.go:495] detecting cgroup driver to use...
	I1013 22:03:18.370719  476377 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:18.370790  476377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:18.390616  476377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:18.407690  476377 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:18.407749  476377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:18.429867  476377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:18.453509  476377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:18.551968  476377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:18.655209  476377 docker.go:234] disabling docker service ...
	I1013 22:03:18.655294  476377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:18.684426  476377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:18.699901  476377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:18.806311  476377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:18.892541  476377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
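Because the kicbase image ships containerd, cri-dockerd, and dockerd alongside cri-o, the sequence above stops, disables, and masks each competing runtime before configuring crio. A condensed sketch of that loop (unit names from the log; the real sequence only disables the sockets and masks the services, and passes -f to stop, where this version applies every verb to every unit):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
		for _, u := range units {
			for _, verb := range []string{"stop", "disable", "mask"} {
				// Tolerate failures: a unit may be absent or already masked.
				if out, err := exec.Command("sudo", "systemctl", verb, u).CombinedOutput(); err != nil {
					fmt.Printf("systemctl %s %s: %v (%s)\n", verb, u, err, out)
				}
			}
		}
	}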
	I1013 22:03:18.907217  476377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:18.924027  476377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:18.924084  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.938177  476377 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:18.938264  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.949869  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.961316  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:18.972845  476377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:18.991342  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:19.002231  476377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:19.023848  476377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:19.043774  476377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:19.053204  476377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:19.061638  476377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:19.149544  476377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:03:21.338723  476377 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.189138984s)
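Taken together, the edits above leave /etc/crictl.yaml pointing crictl at the cri-o socket and rewrite /etc/crio/crio.conf.d/02-crio.conf so the runtime matches the host's systemd cgroup driver and allows unprivileged low ports. Roughly, the resulting files look like this (reconstructed from the sed commands in the log, so the exact surrounding keys may differ):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]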
	I1013 22:03:21.338760  476377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:21.338817  476377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:21.343675  476377 start.go:563] Will wait 60s for crictl version
	I1013 22:03:21.343812  476377 ssh_runner.go:195] Run: which crictl
	I1013 22:03:21.348134  476377 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:21.378299  476377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:21.378394  476377 ssh_runner.go:195] Run: crio --version
	I1013 22:03:21.413031  476377 ssh_runner.go:195] Run: crio --version
	I1013 22:03:21.450173  476377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:21.451796  476377 cli_runner.go:164] Run: docker network inspect embed-certs-521669 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:21.472239  476377 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:21.477215  476377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
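The bash one-liner above is an idempotent hosts-file update: filter out any stale host.minikube.internal line, append the fresh mapping, write to a temp file, and copy it over /etc/hosts. The same transformation expressed as a pure Go function over the file's contents (illustrative; minikube actually runs it as shell over SSH):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost drops any existing line ending in "\t<name>" and appends a
	// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline above.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(before, "192.168.103.1", "host.minikube.internal"))
	}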
	I1013 22:03:21.489114  476377 kubeadm.go:883] updating cluster {Name:embed-certs-521669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:21.489245  476377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:21.489306  476377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:21.527713  476377 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:21.527735  476377 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:21.527786  476377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:21.558294  476377 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:21.558320  476377 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:21.558330  476377 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:21.558445  476377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-521669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:03:21.558545  476377 ssh_runner.go:195] Run: crio config
	I1013 22:03:21.609496  476377 cni.go:84] Creating CNI manager for ""
	I1013 22:03:21.609524  476377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:21.609547  476377 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:21.609579  476377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-521669 NodeName:embed-certs-521669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:21.609761  476377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-521669"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:03:21.609832  476377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:21.618769  476377 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:21.618857  476377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:21.629119  476377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1013 22:03:21.644312  476377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:21.663903  476377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1013 22:03:21.680429  476377 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:21.684505  476377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:21.695900  476377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:21.790892  476377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:21.815795  476377 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669 for IP: 192.168.103.2
	I1013 22:03:21.815822  476377 certs.go:195] generating shared ca certs ...
	I1013 22:03:21.815840  476377 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:21.816024  476377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:21.816092  476377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:21.816108  476377 certs.go:257] generating profile certs ...
	I1013 22:03:21.816175  476377 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key
	I1013 22:03:21.816199  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt with IP's: []
	I1013 22:03:22.052423  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt ...
	I1013 22:03:22.052450  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.crt: {Name:mkbb345b9e3c6179c3a1a0679dee2b90878ff68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.052616  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key ...
	I1013 22:03:22.052626  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/client.key: {Name:mk591cc5b0a51a208e850f5205e0170f11155221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.052707  476377 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79
	I1013 22:03:22.052717  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1013 22:03:22.067243  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 ...
	I1013 22:03:22.067274  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79: {Name:mk0b4fe38a1afbd912ef623383b5de00796c2fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.067447  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79 ...
	I1013 22:03:22.067463  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79: {Name:mkb277f2c351051028070831936aea78f46fa5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.067540  476377 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt.12eccb79 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt
	I1013 22:03:22.067626  476377 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key.12eccb79 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key
	I1013 22:03:22.067686  476377 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key
	I1013 22:03:22.067702  476377 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt with IP's: []
	I1013 22:03:22.346725  476377 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt ...
	I1013 22:03:22.346762  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt: {Name:mk1f76bf76f25c455937efcd4676a0ac1e68b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:22.346936  476377 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key ...
	I1013 22:03:22.346951  476377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key: {Name:mk3e8041d1eb09d22e7cd1e2cfa12be080df28b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
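certs.go builds these profile certificates in-process with Go's crypto/x509. For illustration only (this is not the code path minikube takes), the openssl equivalent of the apiserver certificate generated above, with the same SAN IPs, would be roughly:

    # key and CSR for the apiserver serving cert
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    # sign with the shared minikubeCA, embedding the SAN IPs from the log above
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2')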
	I1013 22:03:22.347157  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:22.347197  476377 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:22.347209  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:22.347249  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:22.347271  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:22.347293  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:22.347330  476377 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:22.348026  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:22.366963  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:22.386244  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:22.406535  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:22.425944  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 22:03:22.444800  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:22.463641  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:22.482633  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/embed-certs-521669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:03:22.502058  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:22.522062  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:22.541313  476377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:22.560633  476377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:22.573837  476377 ssh_runner.go:195] Run: openssl version
	I1013 22:03:22.580595  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:22.589689  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.593692  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.593750  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:22.627884  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:22.637233  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:22.646382  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.650916  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.651016  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:22.686723  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:03:22.696518  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:22.706194  476377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.710674  476377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.710743  476377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:22.749198  476377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
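The link names used here (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: OpenSSL-based clients look a CA up in /etc/ssl/certs by a hash of the certificate subject, so every trusted cert needs a <hash>.0 symlink. The hash is exactly what the 'openssl x509 -hash -noout' invocations above print; for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints: b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0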
	I1013 22:03:22.758410  476377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:22.762370  476377 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
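The failed stat is the expected path here: minikube decides between a first start and a restart by probing for a certificate that only a previous kubeadm run would have created. The check reduces to this sketch:

    if sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "cert present: existing control plane, reuse configuration"
    else
      echo "cert missing: first start, fall through to kubeadm init"
    fi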
	I1013 22:03:22.762430  476377 kubeadm.go:400] StartCluster: {Name:embed-certs-521669 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-521669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:22.762507  476377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:22.762579  476377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:22.792836  476377 cri.go:89] found id: ""
	I1013 22:03:22.792905  476377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:22.801525  476377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:22.809523  476377 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:22.809582  476377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:22.817337  476377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:22.817353  476377 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:22.817403  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:22.825291  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:22.825347  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:22.833187  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:22.840934  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:22.840983  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:22.849131  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:22.857090  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:22.857148  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:22.864854  476377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:22.874181  476377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:22.874241  476377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:03:22.882804  476377 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:22.924902  476377 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:22.924967  476377 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:22.947708  476377 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:22.947825  476377 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:22.947906  476377 kubeadm.go:318] OS: Linux
	I1013 22:03:22.947983  476377 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:22.948057  476377 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:22.948126  476377 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:22.948221  476377 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:22.948273  476377 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:22.948353  476377 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:22.948409  476377 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:22.948502  476377 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:23.011520  476377 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:23.011660  476377 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:23.011794  476377 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:23.019656  476377 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
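The preflight hint above can be followed literally; pre-pulling the control-plane images with the CRI socket this run uses would look like the following (not executed as part of this test):

    sudo kubeadm config images pull \
      --kubernetes-version v1.34.1 \
      --cri-socket unix:///var/run/crio/crio.sock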
	I1013 22:03:21.243680  477441 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-505851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.636646298s)
	I1013 22:03:21.243714  477441 kic.go:203] duration metric: took 4.636832025s to extract preloaded images to volume ...
	W1013 22:03:21.243794  477441 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:21.243828  477441 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:21.243864  477441 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:21.308267  477441 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-505851 --name default-k8s-diff-port-505851 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-505851 --network default-k8s-diff-port-505851 --ip 192.168.76.2 --volume default-k8s-diff-port-505851:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
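Each --publish=127.0.0.1:: flag above binds a container port to a random host port on loopback; the 'docker container inspect' calls that follow resolve those mappings. The same lookup with plain docker, shown for illustration:

    docker port default-k8s-diff-port-505851 22/tcp
    # prints: 127.0.0.1:33078   (the SSH endpoint used for provisioning below)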
	I1013 22:03:21.606891  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Running}}
	I1013 22:03:21.628447  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:21.647849  477441 cli_runner.go:164] Run: docker exec default-k8s-diff-port-505851 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:21.699575  477441 oci.go:144] the created container "default-k8s-diff-port-505851" has a running status.
	I1013 22:03:21.699614  477441 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa...
	I1013 22:03:21.906662  477441 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:21.943935  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:21.967577  477441 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:21.967604  477441 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-505851 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:22.023950  477441 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:03:22.046271  477441 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:22.046396  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.065060  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.065390  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.065408  477441 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:22.206150  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
	I1013 22:03:22.206185  477441 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-505851"
	I1013 22:03:22.206259  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.225627  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.225927  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.225950  477441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-505851 && echo "default-k8s-diff-port-505851" | sudo tee /etc/hostname
	I1013 22:03:22.379118  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
	I1013 22:03:22.379216  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.399213  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:22.399441  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:22.399467  477441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-505851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-505851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-505851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:22.538187  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:22.538222  477441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:22.538281  477441 ubuntu.go:190] setting up certificates
	I1013 22:03:22.538299  477441 provision.go:84] configureAuth start
	I1013 22:03:22.538373  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:22.558015  477441 provision.go:143] copyHostCerts
	I1013 22:03:22.558079  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:22.558091  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:22.558151  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:22.558243  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:22.558251  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:22.558277  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:22.558354  477441 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:22.558366  477441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:22.558401  477441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:22.558507  477441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-505851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-505851 localhost minikube]
	I1013 22:03:22.863338  477441 provision.go:177] copyRemoteCerts
	I1013 22:03:22.863403  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:22.863462  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:22.882550  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:22.983590  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:03:23.004815  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:23.025253  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:23.043559  477441 provision.go:87] duration metric: took 505.243372ms to configureAuth
	I1013 22:03:23.043592  477441 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:23.043750  477441 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:23.043851  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.061730  477441 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:23.062029  477441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1013 22:03:23.062056  477441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:23.314467  477441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:23.314494  477441 machine.go:96] duration metric: took 1.268196643s to provisionDockerMachine
	I1013 22:03:23.314508  477441 client.go:171] duration metric: took 7.362049203s to LocalClient.Create
	I1013 22:03:23.314535  477441 start.go:167] duration metric: took 7.362130092s to libmachine.API.Create "default-k8s-diff-port-505851"
	I1013 22:03:23.314546  477441 start.go:293] postStartSetup for "default-k8s-diff-port-505851" (driver="docker")
	I1013 22:03:23.314561  477441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:23.314628  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:23.314680  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.332760  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.434502  477441 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:23.438499  477441 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:23.438536  477441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:23.438552  477441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:23.438617  477441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:23.438725  477441 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:23.438847  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:23.447664  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:23.471585  477441 start.go:296] duration metric: took 157.023965ms for postStartSetup
	I1013 22:03:23.472033  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:23.492268  477441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:03:23.492537  477441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:23.492587  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.510553  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.606834  477441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:23.612228  477441 start.go:128] duration metric: took 7.665608031s to createHost
	I1013 22:03:23.612262  477441 start.go:83] releasing machines lock for "default-k8s-diff-port-505851", held for 7.665780889s
	I1013 22:03:23.612335  477441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:03:23.631161  477441 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:23.631211  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.631222  477441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:23.631306  477441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:03:23.652666  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.652908  477441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:03:23.806321  477441 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:23.813376  477441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:23.850264  477441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:23.855282  477441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:23.855348  477441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:23.882368  477441 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:23.882397  477441 start.go:495] detecting cgroup driver to use...
	I1013 22:03:23.882432  477441 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:23.882477  477441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:23.904893  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:23.919589  477441 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:23.919649  477441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:23.937455  477441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:23.955182  477441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:24.041834  477441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:24.130369  477441 docker.go:234] disabling docker service ...
	I1013 22:03:24.130441  477441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:24.150665  477441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:24.163903  477441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:24.258822  477441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:24.345840  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
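The stop, disable, mask sequence applied to cri-docker and docker above is deliberate: stopping alone would let socket activation revive the daemon, and disabling alone still permits manual starts; masking links the unit to /dev/null so nothing can start it while CRI-O owns the node. A quick confirmation of the end state (not part of this log):

    systemctl is-enabled docker.service   # prints: masked
    systemctl is-active docker.socket     # prints: inactive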
	I1013 22:03:24.358981  477441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:24.373762  477441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:24.373829  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.384377  477441 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:24.384454  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.394114  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.403356  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.413541  477441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:24.422178  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.431112  477441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.444868  477441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:24.454055  477441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:24.461846  477441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:24.469362  477441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:24.563338  477441 ssh_runner.go:195] Run: sudo systemctl restart crio
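Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following content. This is reconstructed from the commands, not dumped from the node, and the section headers are an assumption about where the kicbase image keeps these keys:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]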
	I1013 22:03:24.673439  477441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:24.673498  477441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:24.677858  477441 start.go:563] Will wait 60s for crictl version
	I1013 22:03:24.677922  477441 ssh_runner.go:195] Run: which crictl
	I1013 22:03:24.681662  477441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:24.708055  477441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:24.708153  477441 ssh_runner.go:195] Run: crio --version
	I1013 22:03:24.738483  477441 ssh_runner.go:195] Run: crio --version
	I1013 22:03:24.773240  477441 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:24.774790  477441 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-505851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:24.792572  477441 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:24.797008  477441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:24.807712  477441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:24.807869  477441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:24.807933  477441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:24.842389  477441 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:24.842412  477441 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:24.842471  477441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:24.869559  477441 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:24.869585  477441 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:24.869593  477441 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:03:24.869699  477441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-505851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
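The empty ExecStart= line in the unit above is the standard systemd override idiom: the first, empty assignment clears the inherited command, otherwise a second ExecStart would accumulate and a non-oneshot service would refuse to load. The merged result on the node can be inspected with:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in written in this sequence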
	I1013 22:03:24.869775  477441 ssh_runner.go:195] Run: crio config
	I1013 22:03:24.919344  477441 cni.go:84] Creating CNI manager for ""
	I1013 22:03:24.919375  477441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:24.919397  477441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:24.919425  477441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-505851 NodeName:default-k8s-diff-port-505851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:24.919579  477441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-505851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
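Before being handed to kubeadm init, a generated file like this can be sanity-checked offline with kubeadm's built-in validator (not run as part of this test):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml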
	
	I1013 22:03:24.919653  477441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:24.929378  477441 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:24.929453  477441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:24.937831  477441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:03:24.952469  477441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:24.970115  477441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1013 22:03:24.984729  477441 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:24.988771  477441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:24.999302  477441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:25.080214  477441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:25.105985  477441 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851 for IP: 192.168.76.2
	I1013 22:03:25.106029  477441 certs.go:195] generating shared ca certs ...
	I1013 22:03:25.106052  477441 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.106216  477441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:25.106272  477441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:25.106284  477441 certs.go:257] generating profile certs ...
	I1013 22:03:25.106359  477441 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key
	I1013 22:03:25.106388  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt with IP's: []
	I1013 22:03:25.419846  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt ...
	I1013 22:03:25.419885  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.crt: {Name:mk728e00aa172d5cca8ad66682bc4e98e7a15542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.420119  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key ...
	I1013 22:03:25.420139  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/client.key: {Name:mk319ddb7ff837a49040402151969c7b02d6de6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.420271  477441 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011
	I1013 22:03:25.420290  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:03:25.711316  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 ...
	I1013 22:03:25.711345  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011: {Name:mk8c81c5a3b955e4d57458a05a01e6351ea6334a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.711548  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011 ...
	I1013 22:03:25.711575  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011: {Name:mk888ff623dbb01a1319c71bbe1b19b0e7c04b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.711704  477441 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt.f604c011 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt
	I1013 22:03:25.711829  477441 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key.f604c011 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key
	I1013 22:03:25.711899  477441 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key
	I1013 22:03:25.711917  477441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt with IP's: []
	I1013 22:03:23.021691  476377 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:23.021806  476377 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:23.021888  476377 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:23.128199  476377 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:23.470210  476377 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:23.535933  476377 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:23.753947  476377 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:03:24.050956  476377 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:03:24.051129  476377 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-521669 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1013 22:03:24.294828  476377 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:03:24.295469  476377 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-521669 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1013 22:03:25.075632  476377 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:03:25.674527  476377 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:03:25.806145  476377 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:03:25.806239  476377 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:03:26.002440  476377 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:03:26.537264  476377 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:03:26.939341  476377 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:03:27.361431  476377 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:03:27.451189  476377 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:03:27.452773  476377 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:03:27.457811  476377 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:03:25.850356  477441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt ...
	I1013 22:03:25.850390  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt: {Name:mk1ed1ee8ae08b5e560918e0c409cb75a0b6ac25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.850569  477441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key ...
	I1013 22:03:25.850584  477441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key: {Name:mkbc909515a9fca03e924b52ead92cf32f804368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:25.850773  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:25.850829  477441 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:25.850841  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:25.850867  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:25.850886  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:25.850904  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:25.850946  477441 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:25.851545  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:25.870750  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:25.888829  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:25.906847  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:25.925047  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:03:25.944035  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:25.963350  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:25.984363  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:26.003013  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:26.022970  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:26.041790  477441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:26.060267  477441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:26.073583  477441 ssh_runner.go:195] Run: openssl version
	I1013 22:03:26.080240  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:26.089182  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.093229  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.093289  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:26.130236  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:26.139667  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:26.148652  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.152721  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.152790  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:26.187015  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:26.196692  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:26.205899  477441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.209838  477441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.209907  477441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:26.244864  477441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
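	The sweep above follows the standard OpenSSL trust-store convention: hash each CA with `openssl x509 -hash -noout` and expose it as `/etc/ssl/certs/<hash>.0`. A minimal Go sketch of the same two steps, assuming local file access rather than minikube's ssh_runner (the path and the `b5213941` hash come from the log; error handling is simplified):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashAndLink mirrors the sequence in the log: compute the OpenSSL
// subject hash of a CA certificate, then create the <hash>.0 symlink
// that the system trust store expects (the "ln -fs" step above).
func hashAndLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-f": drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```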
	I1013 22:03:26.254442  477441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:26.258129  477441 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:26.258182  477441 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:26.258267  477441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:26.258329  477441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:26.287967  477441 cri.go:89] found id: ""
	I1013 22:03:26.288063  477441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:26.297038  477441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:26.305359  477441 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:26.305426  477441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:26.313595  477441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:26.313614  477441 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:26.313662  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 22:03:26.321467  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:26.321518  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:26.329292  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 22:03:26.337378  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:26.337432  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:26.346111  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 22:03:26.354602  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:26.354666  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:26.362665  477441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 22:03:26.370765  477441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:26.370839  477441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
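	The grep/rm pairs above implement a stale-kubeconfig sweep: any leftover conf that does not mention the expected control-plane endpoint is removed before `kubeadm init` runs. A rough sketch of that loop, assuming local file access instead of minikube's ssh_runner (endpoint and file list from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// For each kubeconfig kubeadm may have left behind, keep it only if it
// still points at the expected control-plane endpoint; otherwise remove
// it, mirroring the "grep ... || rm -f" sequence in the log.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file itself) is missing.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s: endpoint not found, removing\n", f)
			_ = os.Remove(f) // "rm -f" equivalent; a missing file is fine
		}
	}
}
```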
	I1013 22:03:26.378542  477441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:26.442708  477441 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:03:26.510211  477441 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:03:27.459306  476377 out.go:252]   - Booting up control plane ...
	I1013 22:03:27.459431  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:03:27.459518  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:03:27.460240  476377 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:03:27.476611  476377 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:03:27.476757  476377 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:03:27.484756  476377 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:03:27.484893  476377 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:03:27.485012  476377 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:03:27.585540  476377 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:03:27.585678  476377 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:03:28.586437  476377 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000949022s
	I1013 22:03:28.590746  476377 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:03:28.590881  476377 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1013 22:03:28.591027  476377 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:03:28.591108  476377 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:03:29.596702  476377 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.005885526s
	I1013 22:03:30.847000  476377 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.256279312s
	I1013 22:03:32.092893  476377 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.50208585s
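	The three control-plane checks above poll each component's healthz/livez endpoint until it answers 200 OK, with a 4-minute budget. A minimal sketch of that probe pattern (URLs from the log; TLS verification is skipped because these components serve self-signed certificates on loopback):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a health endpoint until it returns 200 OK or the
// budget is exhausted, the same shape as the [control-plane-check]
// waits in the log above.
func waitHealthy(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, budget)
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}
```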
	I1013 22:03:32.106657  476377 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:03:32.120734  476377 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:03:32.133072  476377 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:03:32.133366  476377 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-521669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:03:32.143828  476377 kubeadm.go:318] [bootstrap-token] Using token: iu6qpi.vhxdg8i706f1jc7o
	I1013 22:03:32.145955  476377 out.go:252]   - Configuring RBAC rules ...
	I1013 22:03:32.146134  476377 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:03:32.151621  476377 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:03:32.162931  476377 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:03:32.168501  476377 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:03:32.171566  476377 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:03:32.175869  476377 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:03:32.499375  476377 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:03:32.922637  476377 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:03:33.499325  476377 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:03:33.500459  476377 kubeadm.go:318] 
	I1013 22:03:33.500554  476377 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:03:33.500592  476377 kubeadm.go:318] 
	I1013 22:03:33.500714  476377 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:03:33.500724  476377 kubeadm.go:318] 
	I1013 22:03:33.500758  476377 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:03:33.500864  476377 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:03:33.500972  476377 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:03:33.501033  476377 kubeadm.go:318] 
	I1013 22:03:33.501104  476377 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:03:33.501111  476377 kubeadm.go:318] 
	I1013 22:03:33.501165  476377 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:03:33.501174  476377 kubeadm.go:318] 
	I1013 22:03:33.501236  476377 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:03:33.501321  476377 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:03:33.501407  476377 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:03:33.501415  476377 kubeadm.go:318] 
	I1013 22:03:33.501518  476377 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:03:33.501611  476377 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:03:33.501619  476377 kubeadm.go:318] 
	I1013 22:03:33.501719  476377 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token iu6qpi.vhxdg8i706f1jc7o \
	I1013 22:03:33.501843  476377 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:03:33.501879  476377 kubeadm.go:318] 	--control-plane 
	I1013 22:03:33.501886  476377 kubeadm.go:318] 
	I1013 22:03:33.502063  476377 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:03:33.502074  476377 kubeadm.go:318] 
	I1013 22:03:33.502174  476377 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token iu6qpi.vhxdg8i706f1jc7o \
	I1013 22:03:33.502333  476377 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:03:33.505164  476377 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:03:33.505283  476377 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:03:33.505303  476377 cni.go:84] Creating CNI manager for ""
	I1013 22:03:33.505313  476377 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:33.506913  476377 out.go:179] * Configuring CNI (Container Networking Interface) ...
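	The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from `ca.crt`, so a joining node can be checked against the value in the log (the pki path is the kubeadm default, assumed here):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the
// CA certificate's raw SubjectPublicKeyInfo, printed as "sha256:<hex>".
func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```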
	
	
	==> CRI-O <==
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.678294559Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.681803949Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:02:52 no-preload-080337 crio[561]: time="2025-10-13T22:02:52.681830524Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.913637014Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=944aaed8-6d5c-44b5-8b9a-608b814dec21 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.916515915Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fd79e0e1-c693-4b6a-87ee-9473bb630f90 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.91938671Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=2ff58134-6fe7-470a-9f9d-325dcaa5563d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.921565134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.927776443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.928336814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.961225382Z" level=info msg="Created container f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=2ff58134-6fe7-470a-9f9d-325dcaa5563d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.96192305Z" level=info msg="Starting container: f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef" id=45c697ef-dd39-4661-8308-4c69c2242ed5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:06 no-preload-080337 crio[561]: time="2025-10-13T22:03:06.964042994Z" level=info msg="Started container" PID=1755 containerID=f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper id=45c697ef-dd39-4661-8308-4c69c2242ed5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38c676dc0edb3823e7b9790dfc6b4a2e25f729f5df8cd26bd2cb8b5e68c936f3
	Oct 13 22:03:07 no-preload-080337 crio[561]: time="2025-10-13T22:03:07.010640903Z" level=info msg="Removing container: 92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a" id=1527f66b-3018-4b55-86e3-aeb65236effa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:03:07 no-preload-080337 crio[561]: time="2025-10-13T22:03:07.022261741Z" level=info msg="Removed container 92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s/dashboard-metrics-scraper" id=1527f66b-3018-4b55-86e3-aeb65236effa name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.030507805Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1626cc39-59ab-4e6b-82a6-0560a420ae17 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.031571331Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=af0cb6c0-28df-48b8-9149-2e14272b1319 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.032665738Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4588445-2713-467e-a640-b17e34aec21e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.032946808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.040661874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.040900002Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/af77e7e8553fbbca3404061d81d481a625950d22700101f2d2d5524927a4cf66/merged/etc/passwd: no such file or directory"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.04093104Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/af77e7e8553fbbca3404061d81d481a625950d22700101f2d2d5524927a4cf66/merged/etc/group: no such file or directory"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.04129473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.0685341Z" level=info msg="Created container a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4: kube-system/storage-provisioner/storage-provisioner" id=b4588445-2713-467e-a640-b17e34aec21e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.069234087Z" level=info msg="Starting container: a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4" id=946a128b-acb7-458c-87dd-62b0b7ba241a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:13 no-preload-080337 crio[561]: time="2025-10-13T22:03:13.071379583Z" level=info msg="Started container" PID=1769 containerID=a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4 description=kube-system/storage-provisioner/storage-provisioner id=946a128b-acb7-458c-87dd-62b0b7ba241a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b157b262815590a2c71c6209da58bbf7a774a03d3441428685132ea518fb87e1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a2800e4594ddb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b157b26281559       storage-provisioner                          kube-system
	f7a7540b72189       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   38c676dc0edb3       dashboard-metrics-scraper-6ffb444bf9-q2g2s   kubernetes-dashboard
	ff734f532ee90       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   c84c828594c04       kubernetes-dashboard-855c9754f9-mkvmc        kubernetes-dashboard
	0a3d791b517ff       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   30f2676492f91       coredns-66bc5c9577-n6t7s                     kube-system
	c11d7ea10ff07       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   26931721ce632       kindnet-74766                                kube-system
	ffff1cd868444       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   c65145bbe6d8e       busybox                                      default
	ca17462b8cc0e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   5ece505d36e59       kube-proxy-2scrx                             kube-system
	171aa5a37278a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   b157b26281559       storage-provisioner                          kube-system
	148f0bcacf55a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   3bd670e335491       etcd-no-preload-080337                       kube-system
	db978d7166395       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   76a4f9e5e9eb8       kube-apiserver-no-preload-080337             kube-system
	3f85644ea5a0b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   7ea6dbd197034       kube-controller-manager-no-preload-080337    kube-system
	09313475387f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   abf4dddba602b       kube-scheduler-no-preload-080337             kube-system
	
	
	==> coredns [0a3d791b517ffdd9da09560885e05b173435fc2617cdb09b7a07530db6434db5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44289 - 194 "HINFO IN 8769929789709925291.6681308039238444373. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065629172s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
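	The reflector errors above come from List calls against the in-cluster service VIP (10.96.0.1:443). A minimal client-go sketch of the same call, which reproduces exactly this `i/o timeout` when the data path to the VIP is down; it assumes it runs inside a pod, since `rest.InClusterConfig` reads the mounted service-account credentials:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Issue the same namespaces List that CoreDNS's reflector is retrying,
// with an explicit client-side timeout so the failure is visible quickly.
func main() {
	cfg, err := rest.InClusterConfig() // points at https://10.96.0.1:443 from inside a pod
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	nsList, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list namespaces:", err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout"
		return
	}
	fmt.Println("namespaces:", len(nsList.Items))
}
```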
	
	
	==> describe nodes <==
	Name:               no-preload-080337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-080337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-080337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_01_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-080337
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:01:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:03:12 +0000   Mon, 13 Oct 2025 22:02:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-080337
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                b626e944-ef41-4bbd-9e16-cce1552f60c7
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-n6t7s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-080337                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-74766                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-080337              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-080337     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-2scrx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-080337              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q2g2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mkvmc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-080337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-080337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-080337 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node no-preload-080337 event: Registered Node no-preload-080337 in Controller
	  Normal  NodeReady                94s                kubelet          Node no-preload-080337 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node no-preload-080337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node no-preload-080337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node no-preload-080337 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-080337 event: Registered Node no-preload-080337 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [148f0bcacf55a43101a10f115e851d44747ab0b0f8fa14a67c8e9715dc66844d] <==
	{"level":"warn","ts":"2025-10-13T22:02:40.691118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.698026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.705324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.712691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.719815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.726461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.732521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.746261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.759320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.766123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.772426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.778615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.786138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.793377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.800317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.806468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.812647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.818845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.830186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.834967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.845084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:02:40.851271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52898","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:03:14.275624Z","caller":"traceutil/trace.go:172","msg":"trace[1620239583] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"234.058297ms","start":"2025-10-13T22:03:14.041544Z","end":"2025-10-13T22:03:14.275602Z","steps":["trace[1620239583] 'process raft request'  (duration: 233.892225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:03:14.562435Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.883308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-n6t7s\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-10-13T22:03:14.562516Z","caller":"traceutil/trace.go:172","msg":"trace[1526227819] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-n6t7s; range_end:; response_count:1; response_revision:622; }","duration":"156.008322ms","start":"2025-10-13T22:03:14.406491Z","end":"2025-10-13T22:03:14.562500Z","steps":["trace[1526227819] 'range keys from in-memory index tree'  (duration: 155.711594ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:03:35 up  1:46,  0 user,  load average: 4.15, 3.47, 5.88
	Linux no-preload-080337 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c11d7ea10ff07c5ab8ae8feca92e0b0aa357520977cf80360fa01049e5b32b5f] <==
	I1013 22:02:42.456548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:02:42.550101       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:02:42.550273       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:02:42.550290       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:02:42.550315       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:02:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:02:42.659082       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:02:42.750157       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:02:42.750274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:02:42.750693       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:02:42.955346       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:02:42.955390       1 metrics.go:72] Registering metrics
	I1013 22:02:42.956361       1 controller.go:711] "Syncing nftables rules"
	I1013 22:02:52.659115       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:02:52.659172       1 main.go:301] handling current node
	I1013 22:03:02.667057       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:02.667097       1 main.go:301] handling current node
	I1013 22:03:12.659061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:12.659093       1 main.go:301] handling current node
	I1013 22:03:22.664070       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:22.664115       1 main.go:301] handling current node
	I1013 22:03:32.668081       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1013 22:03:32.668115       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db978d7166395383320a2b2c9c28bf365b3b1253da4d608cc691cb890c27b32f] <==
	I1013 22:02:41.382955       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:02:41.382963       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:02:41.382971       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:02:41.381173       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:02:41.380964       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:02:41.380983       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:02:41.381128       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:02:41.383447       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:02:41.388042       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:02:41.410905       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:02:41.411186       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:02:41.422902       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:02:41.422935       1 policy_source.go:240] refreshing policies
	I1013 22:02:41.463814       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:02:41.681166       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:02:41.708476       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:02:41.726785       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:02:41.737523       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:02:41.743479       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:02:41.775635       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.241.189"}
	I1013 22:02:41.785157       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.202.77"}
	I1013 22:02:42.286131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:02:45.024493       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:02:45.174302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:02:45.272828       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3f85644ea5a0b267c7fc78009aa5bfd8d8247edbf9e2e04243d0da00d40977e5] <==
	I1013 22:02:44.702052       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:02:44.704356       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:02:44.706892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:02:44.708215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:02:44.711363       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:02:44.720373       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:02:44.720421       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:02:44.720453       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:02:44.720473       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:02:44.720498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:02:44.720543       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:02:44.720759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:02:44.720899       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:02:44.720916       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:02:44.720921       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:02:44.721070       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:02:44.721103       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:02:44.721692       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:02:44.721724       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:02:44.722922       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:02:44.722950       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:02:44.724106       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:02:44.726358       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:02:44.741458       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:02:44.744776       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca17462b8cc0e8271f720f326aced92a21cf66c7a613241186fd9386088f8ac4] <==
	I1013 22:02:42.313419       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:02:42.369257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:02:42.469690       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:02:42.469735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:02:42.469844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:02:42.491377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:02:42.491434       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:02:42.496786       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:02:42.497272       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:02:42.497304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:42.498569       1 config.go:200] "Starting service config controller"
	I1013 22:02:42.498597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:02:42.498683       1 config.go:309] "Starting node config controller"
	I1013 22:02:42.498690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:02:42.498835       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:02:42.498850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:02:42.499264       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:02:42.499460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:02:42.599169       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:02:42.599206       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:02:42.599206       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:02:42.599741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [09313475387f6d9193c4369e317fc1d49a163fc8159f82148fea73cd3e610424] <==
	I1013 22:02:39.920828       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:02:41.390443       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:02:41.390476       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:02:41.396805       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:02:41.396839       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:02:41.396922       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.396923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.396944       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.396953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.397506       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:02:41.397594       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:02:41.497459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:02:41.497464       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:02:41.497473       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 13 22:02:45 no-preload-080337 kubelet[710]: I1013 22:02:45.344056     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhbh\" (UniqueName: \"kubernetes.io/projected/8b62cb5c-c068-444e-a216-87c6c73d107b-kube-api-access-vwhbh\") pod \"kubernetes-dashboard-855c9754f9-mkvmc\" (UID: \"8b62cb5c-c068-444e-a216-87c6c73d107b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc"
	Oct 13 22:02:45 no-preload-080337 kubelet[710]: I1013 22:02:45.344153     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b62cb5c-c068-444e-a216-87c6c73d107b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mkvmc\" (UID: \"8b62cb5c-c068-444e-a216-87c6c73d107b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc"
	Oct 13 22:02:46 no-preload-080337 kubelet[710]: I1013 22:02:46.501048     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:02:47 no-preload-080337 kubelet[710]: I1013 22:02:47.955749     710 scope.go:117] "RemoveContainer" containerID="3eddbe430db5fa262b81161bb8d5b10238dd1e0dacfdab840055d5c0a3f08255"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: I1013 22:02:48.960388     710 scope.go:117] "RemoveContainer" containerID="3eddbe430db5fa262b81161bb8d5b10238dd1e0dacfdab840055d5c0a3f08255"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: I1013 22:02:48.960592     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:48 no-preload-080337 kubelet[710]: E1013 22:02:48.960791     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:02:49 no-preload-080337 kubelet[710]: I1013 22:02:49.964530     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:49 no-preload-080337 kubelet[710]: E1013 22:02:49.964724     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:02:51 no-preload-080337 kubelet[710]: I1013 22:02:51.980602     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mkvmc" podStartSLOduration=1.32029568 podStartE2EDuration="6.980579985s" podCreationTimestamp="2025-10-13 22:02:45 +0000 UTC" firstStartedPulling="2025-10-13 22:02:45.57297582 +0000 UTC m=+6.747195581" lastFinishedPulling="2025-10-13 22:02:51.233260136 +0000 UTC m=+12.407479886" observedRunningTime="2025-10-13 22:02:51.98035587 +0000 UTC m=+13.154575639" watchObservedRunningTime="2025-10-13 22:02:51.980579985 +0000 UTC m=+13.154799754"
	Oct 13 22:02:53 no-preload-080337 kubelet[710]: I1013 22:02:53.101101     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:02:53 no-preload-080337 kubelet[710]: E1013 22:02:53.101284     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:06 no-preload-080337 kubelet[710]: I1013 22:03:06.913158     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: I1013 22:03:07.009363     710 scope.go:117] "RemoveContainer" containerID="92d533ba7a51e6d43482acb5451c0b339d11c086bfdfdc9f7dbfcbefb4f5002a"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: I1013 22:03:07.009637     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:07 no-preload-080337 kubelet[710]: E1013 22:03:07.010062     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: I1013 22:03:13.030144     710 scope.go:117] "RemoveContainer" containerID="171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: I1013 22:03:13.102201     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:13 no-preload-080337 kubelet[710]: E1013 22:03:13.102419     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:23 no-preload-080337 kubelet[710]: I1013 22:03:23.912104     710 scope.go:117] "RemoveContainer" containerID="f7a7540b72189df38075c56febc2382f76a3f78677b19a8e85ae274d5d30b6ef"
	Oct 13 22:03:23 no-preload-080337 kubelet[710]: E1013 22:03:23.912325     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q2g2s_kubernetes-dashboard(69d7efac-3f98-4e70-9521-1a59cbf3ce29)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q2g2s" podUID="69d7efac-3f98-4e70-9521-1a59cbf3ce29"
	Oct 13 22:03:30 no-preload-080337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:03:30 no-preload-080337 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:03:30 no-preload-080337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:03:30 no-preload-080337 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [ff734f532ee90c978ae4ce5cfb25e9648dbfe2eedcb5f833476bc6ebc32b57e8] <==
	2025/10/13 22:02:51 Starting overwatch
	2025/10/13 22:02:51 Using namespace: kubernetes-dashboard
	2025/10/13 22:02:51 Using in-cluster config to connect to apiserver
	2025/10/13 22:02:51 Using secret token for csrf signing
	2025/10/13 22:02:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:02:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:02:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:02:51 Generating JWE encryption key
	2025/10/13 22:02:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:02:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:02:51 Initializing JWE encryption key from synchronized object
	2025/10/13 22:02:51 Creating in-cluster Sidecar client
	2025/10/13 22:02:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:02:51 Serving insecurely on HTTP port: 9090
	2025/10/13 22:03:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [171aa5a37278a899b44963bc44d42ebd79c2ac51b6a51f575a8e1e30845ec531] <==
	I1013 22:02:42.276038       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:03:12.278428       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a2800e4594ddbdd381e3a3e55fb92350f657478bba273f9ed6e919eaf04046e4] <==
	I1013 22:03:13.085165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:03:13.093750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:03:13.093815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:03:13.096065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:16.551082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:20.811932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:24.410527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:27.465424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.489285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.495424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:30.495588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:03:30.496231       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa!
	I1013 22:03:30.496705       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7f52034-0e22-43b7-ac83-32c79d19cae9", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa became leader
	W1013 22:03:30.499097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:30.504638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:30.596550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-080337_148b1ced-0af5-4ac8-b206-358d4e269ffa!
	W1013 22:03:32.508409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:32.518623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:34.523073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:34.528505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
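Two failure signatures stand out in the dump above: dashboard-metrics-scraper stuck in CrashLoopBackOff, and the first storage-provisioner instance dying on an i/o timeout against the apiserver service IP (10.96.0.1:443). A minimal sketch for chasing both by hand, assuming the profile and pod names from this log are still live:

  # pull the crashed container's logs from its previous run
  kubectl --context no-preload-080337 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-q2g2s --previous

  # verify the service network is reachable from inside the node
  # (assumes curl is available in the kicbase node image)
  minikube -p no-preload-080337 ssh -- curl -sk https://10.96.0.1/version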
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080337 -n no-preload-080337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080337 -n no-preload-080337: exit status 2 (360.965014ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-080337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.754723ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
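The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check, which shells out to `sudo runc list -f json` inside the node and fails because /run/runc does not exist; with the crio runtime the runc state directory typically lives elsewhere. A rough sketch for reproducing the check and locating the state root crio actually uses (container name taken from this run; the grep is a loose filter, not an exact key):

  # re-run the exact check minikube performs inside the node container
  docker exec default-k8s-diff-port-505851 sudo runc list -f json

  # inspect crio's runtime configuration for the state/root paths
  docker exec default-k8s-diff-port-505851 sudo crio config | grep -i -A3 runtime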
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-505851 describe deploy/metrics-server -n kube-system: exit status 1 (98.001356ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-505851 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
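The assertion above scrapes `kubectl describe` output for the rewritten image reference, and here it is empty because the deployment never got created. On a cluster where the addon did deploy, the image field can be read directly (a sketch; the jsonpath only returns something once the deployment exists):

  kubectl --context default-k8s-diff-port-505851 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'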
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-505851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-505851:

-- stdout --
	[
	    {
	        "Id": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	        "Created": "2025-10-13T22:03:21.32648793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:03:21.366280922Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hosts",
	        "LogPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea-json.log",
	        "Name": "/default-k8s-diff-port-505851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-505851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-505851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	                "LowerDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-505851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-505851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-505851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7673e020f500b742c0944abe49c705debeb85d3e3d3a237e87e4c7aa07698e5",
	            "SandboxKey": "/var/run/docker/netns/d7673e020f50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-505851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:70:26:c7:3e:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd127c16ad9414037a41fda45a58cf82e4113c81cfa569a1b9f2b3db8c366a7a",
	                    "EndpointID": "bab29fe86e1f90b433325300db1989d3daf349b6245305754783528f474e080c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-505851",
	                        "25632f4a587b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
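Most of the inspect blob above is incidental to this failure; the fields the harness keys on are the container state and the published host ports. Standard `docker inspect` format strings can pull just those (shown against the container name from this run):

  docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-505851
  docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-505851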
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25: (1.328765623s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p cert-expiration-894101                                                                                                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:03:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:03:48.029761  487583 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:48.030023  487583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:48.030035  487583 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:48.030041  487583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:48.030296  487583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:48.030780  487583 out.go:368] Setting JSON to false
	I1013 22:03:48.031936  487583 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6376,"bootTime":1760386652,"procs":342,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:03:48.032065  487583 start.go:141] virtualization: kvm guest
	I1013 22:03:48.034430  487583 out.go:179] * [auto-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:03:48.036358  487583 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:03:48.036414  487583 notify.go:220] Checking for updates...
	I1013 22:03:48.038906  487583 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:03:48.040504  487583 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:03:48.041872  487583 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:03:48.043243  487583 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:03:48.044845  487583 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:03:48.046696  487583 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.046819  487583 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.046968  487583 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.047110  487583 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:03:48.073525  487583 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:03:48.073625  487583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:48.138195  487583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:03:48.12659078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:48.138327  487583 docker.go:318] overlay module found
	I1013 22:03:48.140592  487583 out.go:179] * Using the docker driver based on user configuration
	I1013 22:03:48.142124  487583 start.go:305] selected driver: docker
	I1013 22:03:48.142142  487583 start.go:925] validating driver "docker" against <nil>
	I1013 22:03:48.142153  487583 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:03:48.142712  487583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:48.216084  487583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:03:48.198359217 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:48.216338  487583 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:03:48.216566  487583 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:48.218603  487583 out.go:179] * Using Docker driver with root privileges
	I1013 22:03:48.220161  487583 cni.go:84] Creating CNI manager for ""
	I1013 22:03:48.220255  487583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:48.220270  487583 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:03:48.220345  487583 start.go:349] cluster config:
	{Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:48.221846  487583 out.go:179] * Starting "auto-200102" primary control-plane node in "auto-200102" cluster
	I1013 22:03:48.223068  487583 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:03:48.224361  487583 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:03:48.225605  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.225650  487583 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:03:48.225657  487583 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:03:48.225688  487583 cache.go:58] Caching tarball of preloaded images
	I1013 22:03:48.225840  487583 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:03:48.225851  487583 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:03:48.225978  487583 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json ...
	I1013 22:03:48.226022  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json: {Name:mkf8a6685b530b08c33830ead99deec2c559bb78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:48.247357  487583 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:03:48.247381  487583 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:03:48.247397  487583 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:03:48.247422  487583 start.go:360] acquireMachinesLock for auto-200102: {Name:mkec2895047b3318600813a981c122de09ee3451 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:03:48.247518  487583 start.go:364] duration metric: took 80.213µs to acquireMachinesLock for "auto-200102"
	I1013 22:03:48.247542  487583 start.go:93] Provisioning new machine with config: &{Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:03:48.247615  487583 start.go:125] createHost starting for "" (driver="docker")
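
The acquireMachinesLock step above serializes host creation per machine name, retrying every 500ms with a 10-minute ceiling (the {... Delay:500ms Timeout:10m0s ...} spec logged by start.go:360). A minimal Go sketch of those timeout semantics, assuming a simple in-process channel lock rather than minikube's actual file-based lock:

-- sketch --
package main

import (
	"fmt"
	"time"
)

// timedLock grants the lock to one holder at a time and bounds the wait.
type timedLock struct{ ch chan struct{} }

func newTimedLock() *timedLock {
	l := &timedLock{ch: make(chan struct{}, 1)}
	l.ch <- struct{}{} // start unlocked
	return l
}

// Acquire blocks until the lock is free or the timeout elapses.
func (l *timedLock) Acquire(timeout time.Duration) error {
	select {
	case <-l.ch:
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("acquire timed out after %v", timeout)
	}
}

func (l *timedLock) Release() { l.ch <- struct{}{} }

func main() {
	l := newTimedLock()
	start := time.Now()
	if err := l.Acquire(10 * time.Minute); err != nil {
		panic(err)
	}
	defer l.Release()
	fmt.Printf("took %v to acquire lock\n", time.Since(start))
}
-- /sketch --
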
	I1013 22:03:44.989377  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Running}}
	I1013 22:03:45.009605  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.032666  484490 cli_runner.go:164] Run: docker exec newest-cni-843554 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:45.086247  484490 oci.go:144] the created container "newest-cni-843554" has a running status.
	I1013 22:03:45.086287  484490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa...
	I1013 22:03:45.645319  484490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:45.674508  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.693914  484490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:45.693946  484490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-843554 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:45.746957  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.769075  484490 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:45.769199  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:45.790048  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:45.790402  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:45.790425  484490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:45.935251  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-843554
	
	I1013 22:03:45.935281  484490 ubuntu.go:182] provisioning hostname "newest-cni-843554"
	I1013 22:03:45.935351  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:45.965899  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:45.966222  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:45.966246  484490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-843554 && echo "newest-cni-843554" | sudo tee /etc/hostname
	I1013 22:03:46.121610  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-843554
	
	I1013 22:03:46.121696  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.140604  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:46.140967  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:46.141012  484490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843554/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:46.279212  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
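
The shell snippet above is idempotent: it leaves /etc/hosts alone if any line already ends in the node name, rewrites an existing 127.0.1.1 entry if one is present, and only appends as a last resort. The same decision tree in Go, assuming direct file access instead of SSH (ensureHostsEntry and the hard-coded paths are illustrative):

-- sketch --
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the grep/sed/tee logic from the log above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped? Then do nothing (the first grep -xq branch).
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	entry := []byte("127.0.1.1 " + hostname)
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, entry) // the sed branch
	} else {
		data = append(data, '\n')
		data = append(data, entry...) // the tee -a branch
		data = append(data, '\n')
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-843554"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
-- /sketch --
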
	I1013 22:03:46.279243  484490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:46.279270  484490 ubuntu.go:190] setting up certificates
	I1013 22:03:46.279283  484490 provision.go:84] configureAuth start
	I1013 22:03:46.279344  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:46.297002  484490 provision.go:143] copyHostCerts
	I1013 22:03:46.297075  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:46.297089  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:46.297160  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:46.297282  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:46.297295  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:46.297326  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:46.297424  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:46.297433  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:46.297458  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:46.297513  484490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843554 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-843554]
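
configureAuth issues a machine server certificate signed by the cached minikube CA, with SANs for every address a client may dial: 127.0.0.1 (the forwarded docker port), the container IP 192.168.94.2, and the localhost/minikube/newest-cni-843554 names. A Go crypto/x509 sketch of equivalent issuance, assuming a throwaway in-memory CA in place of the ca.pem/ca-key.pem pair read from disk (error handling elided to keep the sketch short):

-- sketch --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem/ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-843554"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-843554"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /sketch --
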
	I1013 22:03:46.464762  484490 provision.go:177] copyRemoteCerts
	I1013 22:03:46.464825  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:46.464863  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.483946  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:46.584978  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:03:46.605571  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:46.624155  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:46.642517  484490 provision.go:87] duration metric: took 363.219497ms to configureAuth
	I1013 22:03:46.642544  484490 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:46.642726  484490 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:46.642860  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.660981  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:46.661280  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:46.661309  484490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:46.923912  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:46.923945  484490 machine.go:96] duration metric: took 1.154833734s to provisionDockerMachine
	I1013 22:03:46.923960  484490 client.go:171] duration metric: took 6.852553107s to LocalClient.Create
	I1013 22:03:46.924102  484490 start.go:167] duration metric: took 6.852623673s to libmachine.API.Create "newest-cni-843554"
	I1013 22:03:46.924125  484490 start.go:293] postStartSetup for "newest-cni-843554" (driver="docker")
	I1013 22:03:46.924140  484490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:46.924216  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:46.924275  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.942648  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.049535  484490 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:47.053688  484490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:47.053722  484490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:47.053737  484490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:47.053800  484490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:47.053900  484490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:47.054053  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:47.063687  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:47.086898  484490 start.go:296] duration metric: took 162.754626ms for postStartSetup
	I1013 22:03:47.087348  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:47.105824  484490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/config.json ...
	I1013 22:03:47.106168  484490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:47.106225  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.125215  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.222639  484490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:47.227465  484490 start.go:128] duration metric: took 7.159265299s to createHost
	I1013 22:03:47.227489  484490 start.go:83] releasing machines lock for "newest-cni-843554", held for 7.159444146s
	I1013 22:03:47.227552  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:47.245500  484490 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:47.245554  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.245598  484490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:47.245692  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.264930  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.265089  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.378822  484490 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:47.469539  484490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:47.508492  484490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:47.513761  484490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:47.513836  484490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:47.552336  484490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:47.552364  484490 start.go:495] detecting cgroup driver to use...
	I1013 22:03:47.552405  484490 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:47.552458  484490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:47.570253  484490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:47.585423  484490 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:47.585487  484490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:47.604505  484490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:47.624493  484490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:47.711886  484490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:47.808332  484490 docker.go:234] disabling docker service ...
	I1013 22:03:47.808406  484490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:47.831438  484490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:47.846564  484490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:47.936209  484490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:48.024401  484490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:48.039068  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:48.055762  484490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:48.055833  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.068981  484490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:48.069065  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.079720  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.089729  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.101605  484490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:48.113238  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.124242  484490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.141603  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.151952  484490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:48.161324  484490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:48.171468  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:48.262062  484490 ssh_runner.go:195] Run: sudo systemctl restart crio
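
The run of sed commands above retargets the CRI-O drop-in before the daemon-reload/restart pair: it pins pause_image, forces cgroup_manager to systemd, and opens unprivileged low ports via default_sysctls. The pattern is "line-anchored regex, replace the whole line"; a Go sketch of the same edit, assuming the drop-in path from the log (setKey is illustrative):

-- sketch --
package main

import (
	"os"
	"regexp"
)

// setKey replaces any line assigning `key` with a fresh quoted assignment,
// the same effect as the `sed -i 's|^.*key = .*$|...|'` calls above.
func setKey(data []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = setKey(data, "pause_image", "registry.k8s.io/pause:3.10.1")
	data = setKey(data, "cgroup_manager", "systemd")
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}
-- /sketch --
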
	I1013 22:03:48.384584  484490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:48.384675  484490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:48.389029  484490 start.go:563] Will wait 60s for crictl version
	I1013 22:03:48.389089  484490 ssh_runner.go:195] Run: which crictl
	I1013 22:03:48.393403  484490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:48.422366  484490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:48.422453  484490 ssh_runner.go:195] Run: crio --version
	I1013 22:03:48.459752  484490 ssh_runner.go:195] Run: crio --version
	I1013 22:03:48.496533  484490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:48.498541  484490 cli_runner.go:164] Run: docker network inspect newest-cni-843554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:48.518781  484490 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:48.523724  484490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:48.539804  484490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 22:03:48.541518  484490 kubeadm.go:883] updating cluster {Name:newest-cni-843554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:48.541684  484490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.541758  484490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:48.589195  484490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:48.589221  484490 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:48.589276  484490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:48.619839  484490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:48.619867  484490 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:48.619877  484490 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:48.620027  484490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
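
Note the empty ExecStart= line in the unit above: systemd requires clearing an inherited ExecStart before a drop-in may define its own. A sketch that renders a comparable drop-in with text/template, assuming the binary path, node name, and node IP from the log and a trimmed flag set:

-- sketch --
package main

import (
	"os"
	"text/template"
)

// A reduced version of the kubelet drop-in above; the blank ExecStart=
// resets the unit before the replacement command line is set.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Kubelet": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"Node":    "newest-cni-843554",
		"IP":      "192.168.94.2",
	})
}
-- /sketch --
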
	I1013 22:03:48.620125  484490 ssh_runner.go:195] Run: crio config
	I1013 22:03:48.675020  484490 cni.go:84] Creating CNI manager for ""
	I1013 22:03:48.675054  484490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:48.675082  484490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:03:48.675113  484490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843554 NodeName:newest-cni-843554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:48.675349  484490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
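
The kubeadm config above (the 2211-byte kubeadm.yaml.new scp'd below) is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) separated by ---. A minimal splitter for such a file, assuming separators sit alone on their lines (a real consumer would hand each document to a YAML decoder):

-- sketch --
package main

import (
	"fmt"
	"strings"
)

// splitDocs breaks a multi-document YAML stream on standalone "---" lines.
func splitDocs(s string) []string {
	var docs []string
	for _, d := range strings.Split("\n"+s+"\n", "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, strings.TrimSpace(d))
		}
	}
	return docs
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
	for i, d := range splitDocs(cfg) {
		fmt.Printf("doc %d: %q\n", i, d)
	}
}
-- /sketch --
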
	
	I1013 22:03:48.675426  484490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:48.685765  484490 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:48.685840  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:48.700492  484490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:03:48.715480  484490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:48.736843  484490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1013 22:03:48.753852  484490 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:48.758556  484490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:48.770251  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:48.877354  484490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:48.902559  484490 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554 for IP: 192.168.94.2
	I1013 22:03:48.902585  484490 certs.go:195] generating shared ca certs ...
	I1013 22:03:48.902609  484490 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:48.902877  484490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:48.902985  484490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:48.903020  484490 certs.go:257] generating profile certs ...
	I1013 22:03:48.903097  484490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key
	I1013 22:03:48.903126  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt with IP's: []
	I1013 22:03:49.057720  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt ...
	I1013 22:03:49.057751  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt: {Name:mk7b8adddbfe017f323f38ba72916ea92982169d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.057949  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key ...
	I1013 22:03:49.057965  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key: {Name:mk440f864649a393170c3a076e9f3a5d9875385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.058104  484490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83
	I1013 22:03:49.058124  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1013 22:03:49.214169  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 ...
	I1013 22:03:49.214201  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83: {Name:mkef5e95e537af606c6578cec70e1202f77a6fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.214360  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83 ...
	I1013 22:03:49.214373  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83: {Name:mka0c547785c945644da162e5224e48ce3abdc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.214444  484490 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt
	I1013 22:03:49.214538  484490 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key
	I1013 22:03:49.214602  484490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key
	I1013 22:03:49.214619  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt with IP's: []
	I1013 22:03:49.321721  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt ...
	I1013 22:03:49.321754  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt: {Name:mkd0a7460df55f794e99e82014d619b44d916362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.321922  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key ...
	I1013 22:03:49.321936  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key: {Name:mk1bcdb3df4e29f352f461649b9c23e45dfbcd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.322144  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:49.322182  484490 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:49.322192  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:49.322260  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:49.322289  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:49.322308  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:49.322345  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:49.322872  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:49.345608  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:49.365381  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:49.386555  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:49.408327  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:03:49.427607  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:49.448524  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:49.468469  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:49.488745  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:49.512177  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:49.532121  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:49.552092  484490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:49.565826  484490 ssh_runner.go:195] Run: openssl version
	I1013 22:03:49.572548  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:49.582259  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.587260  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.587333  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.623399  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:49.633159  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:49.642444  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.646726  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.646797  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.693448  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:49.703828  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:49.714522  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.719340  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.719406  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.755560  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
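
The three test -L || ln -fs commands above install each CA under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0); that <hash>.0 naming is how OpenSSL-based clients look up trust anchors in /etc/ssl/certs. A sketch reproducing the hash-and-link step by shelling out to the same openssl x509 -hash invocation seen in the log (linkCert and the sample path are illustrative):

-- sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the subject hash of a PEM certificate and symlinks it
// into /etc/ssl/certs as <hash>.0, mirroring the shell sequence above.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
-- /sketch --
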
	I1013 22:03:49.766938  484490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:49.771346  484490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:49.771419  484490 kubeadm.go:400] StartCluster: {Name:newest-cni-843554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:49.771494  484490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:49.771549  484490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:49.803164  484490 cri.go:89] found id: ""
	I1013 22:03:49.803236  484490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:49.812701  484490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:49.821923  484490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:49.822014  484490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:49.831379  484490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:49.831400  484490 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:49.831460  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:49.841491  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:49.841561  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:49.851154  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:49.860270  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:49.860346  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:49.868788  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:49.877706  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:49.877797  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:49.886425  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:49.895971  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:49.896067  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:03:49.904646  484490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
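
The docker driver cannot satisfy kubeadm's host-level preflights (swap, CPU and memory floors, bridge-nf-call-iptables, files already present under /etc/kubernetes/manifests), so init runs with the long --ignore-preflight-errors list above. A sketch of assembling such an invocation, assuming a shortened ignore list (kubeadmInit is an illustration, not minikube's command builder):

-- sketch --
package main

import (
	"fmt"
	"strings"
)

// kubeadmInit builds a command line in the shape of the Start line above:
// the versioned binary directory is prepended to PATH so the matching
// kubeadm is picked up, and the ignore list is joined with commas.
func kubeadmInit(version, config string, ignores []string) string {
	return fmt.Sprintf(
		"sudo env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s",
		version, config, strings.Join(ignores, ","))
}

func main() {
	fmt.Println(kubeadmInit("v1.34.1", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}))
}
-- /sketch --
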
	I1013 22:03:49.943671  484490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:49.943748  484490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:49.968544  484490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:49.968630  484490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:49.968729  484490 kubeadm.go:318] OS: Linux
	I1013 22:03:49.968806  484490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:49.968882  484490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:49.968956  484490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:49.969052  484490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:49.969122  484490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:49.969195  484490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:49.969294  484490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:49.969375  484490 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:50.036448  484490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:50.036634  484490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:50.036804  484490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:50.045449  484490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:03:46.829086  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:49.329207  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:47.897468  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:50.396885  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:48.250461  487583 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:48.250685  487583 start.go:159] libmachine.API.Create for "auto-200102" (driver="docker")
	I1013 22:03:48.250716  487583 client.go:168] LocalClient.Create starting
	I1013 22:03:48.250779  487583 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:48.250824  487583 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:48.250850  487583 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:48.250920  487583 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:48.250945  487583 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:48.250955  487583 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:48.251378  487583 cli_runner.go:164] Run: docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:48.269584  487583 cli_runner.go:211] docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:48.269669  487583 network_create.go:284] running [docker network inspect auto-200102] to gather additional debugging logs...
	I1013 22:03:48.269693  487583 cli_runner.go:164] Run: docker network inspect auto-200102
	W1013 22:03:48.290081  487583 cli_runner.go:211] docker network inspect auto-200102 returned with exit code 1
	I1013 22:03:48.290118  487583 network_create.go:287] error running [docker network inspect auto-200102]: docker network inspect auto-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-200102 not found
	I1013 22:03:48.290134  487583 network_create.go:289] output of [docker network inspect auto-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-200102 not found
	
	** /stderr **
	I1013 22:03:48.290257  487583 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:48.308172  487583 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:48.308839  487583 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:48.309630  487583 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:48.310415  487583 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd127c16ad94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:91:d2:e9:26:c1} reservation:<nil>}
	I1013 22:03:48.311447  487583 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb4820}
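
The scan above walks the docker bridge ranges in steps of 9 (192.168.49.0/24, .58, .67, .76) and takes the first /24 with no existing interface, here 192.168.85.0/24. The selection loop in miniature, assuming the set of taken subnets has already been gathered (firstFreeSubnet is illustrative; minikube inspects real bridges and reservations):

-- sketch --
package main

import "fmt"

// firstFreeSubnet steps through candidate private /24 blocks, skipping
// any already claimed by an existing bridge, as the log lines above do.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 { // 49, 58, 67, ... per the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
-- /sketch --
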
	I1013 22:03:48.311476  487583 network_create.go:124] attempt to create docker network auto-200102 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:03:48.311546  487583 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-200102 auto-200102
	I1013 22:03:48.377591  487583 network_create.go:108] docker network auto-200102 192.168.85.0/24 created
	I1013 22:03:48.377626  487583 kic.go:121] calculated static IP "192.168.85.2" for the "auto-200102" container
	I1013 22:03:48.377682  487583 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:48.398421  487583 cli_runner.go:164] Run: docker volume create auto-200102 --label name.minikube.sigs.k8s.io=auto-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:48.418179  487583 oci.go:103] Successfully created a docker volume auto-200102
	I1013 22:03:48.418284  487583 cli_runner.go:164] Run: docker run --rm --name auto-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-200102 --entrypoint /usr/bin/test -v auto-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:48.836404  487583 oci.go:107] Successfully prepared a docker volume auto-200102
	I1013 22:03:48.836446  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.836473  487583 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:48.836531  487583 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
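
Rather than pulling images inside the new node, the step above mounts the preload tarball read-only into a throwaway kicbase container and untars it straight into the auto-200102 volume, with tar as the container entrypoint. The same invocation driven from Go, assuming the cache path sits under $HOME (the image reference and volume name are copied from the log):

-- sketch --
package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	// Mount the tarball read-only and the machine volume at /extractDir,
	// then run `tar -I lz4 -xf` inside the kicbase image.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "auto-200102:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
-- /sketch --
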
	I1013 22:03:50.048505  484490 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:50.048632  484490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:50.048750  484490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:50.091988  484490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:50.812126  484490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:51.090226  484490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:51.169725  484490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:03:51.456659  484490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:03:51.456791  484490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-843554] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:03:51.756078  484490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:03:51.756203  484490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-843554] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:03:52.153252  484490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:03:52.289647  484490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:03:52.667517  484490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:03:52.667604  484490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:03:52.883918  484490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:03:52.962553  484490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:03:53.337169  484490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:03:54.049057  484490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:03:54.118769  484490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:03:54.118915  484490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:03:54.126793  484490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:03:54.129568  484490 out.go:252]   - Booting up control plane ...
	I1013 22:03:54.129744  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:03:54.129875  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:03:54.130021  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:03:54.160861  484490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:03:54.161068  484490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:03:54.170584  484490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:03:54.171370  484490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:03:54.171506  484490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:03:54.286488  484490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:03:54.286636  484490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
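The [kubelet-check] line documents a bounded poll against the kubelet's local healthz port. A sketch of that wait loop; the endpoint and the 4m0s budget are taken from the log line, while the 500ms poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitKubeletHealthy() error {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // kubelet is serving; static pods can come up
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kubelet did not become healthy within 4m0s")
    }

    func main() {
    	if err := waitKubeletHealthy(); err != nil {
    		fmt.Println(err)
    	}
    }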
	W1013 22:03:51.829574  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:54.328986  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	I1013 22:03:55.329559  477441 node_ready.go:49] node "default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:55.329588  477441 node_ready.go:38] duration metric: took 10.503950666s for node "default-k8s-diff-port-505851" to be "Ready" ...
	I1013 22:03:55.329604  477441 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:03:55.329651  477441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:03:55.342935  477441 api_server.go:72] duration metric: took 11.238841693s to wait for apiserver process to appear ...
	I1013 22:03:55.342963  477441 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:03:55.342986  477441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:03:55.348851  477441 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1013 22:03:55.349977  477441 api_server.go:141] control plane version: v1.34.1
	I1013 22:03:55.350033  477441 api_server.go:131] duration metric: took 7.06078ms to wait for apiserver health ...
	I1013 22:03:55.350046  477441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:03:55.353189  477441 system_pods.go:59] 8 kube-system pods found
	I1013 22:03:55.353230  477441 system_pods.go:61] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.353251  477441 system_pods.go:61] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.353264  477441 system_pods.go:61] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.353273  477441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.353282  477441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.353384  477441 system_pods.go:61] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.353393  477441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.353405  477441 system_pods.go:61] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.353422  477441 system_pods.go:74] duration metric: took 3.363327ms to wait for pod list to return data ...
	I1013 22:03:55.353439  477441 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:03:55.355859  477441 default_sa.go:45] found service account: "default"
	I1013 22:03:55.355880  477441 default_sa.go:55] duration metric: took 2.430374ms for default service account to be created ...
	I1013 22:03:55.355889  477441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:03:55.358605  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.358632  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.358639  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.358645  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.358651  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.358659  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.358669  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.358673  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.358680  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.358716  477441 retry.go:31] will retry after 253.55772ms: missing components: kube-dns
	I1013 22:03:55.619570  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.619654  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.619702  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.619721  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.619728  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.619733  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.619743  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.619757  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.619787  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.619814  477441 retry.go:31] will retry after 279.508132ms: missing components: kube-dns
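Each retry.go:31 line above re-lists the kube-system pods and sleeps a jittered, growing delay (253ms, 279ms, 412ms, 533ms ...) until nothing is missing, here kube-dns. A generic sketch of that pattern; the function names and exact backoff policy are illustrative, not minikube's retry package:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs check with roughly exponential, jittered backoff
    // until it reports no missing components or the budget is spent.
    func retryUntil(check func() []string, budget time.Duration) error {
    	delay := 250 * time.Millisecond
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		missing := check()
    		if len(missing) == 0 {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return fmt.Errorf("components still missing after %v", budget)
    }

    func main() {
    	calls := 0
    	_ = retryUntil(func() []string {
    		calls++
    		if calls < 3 {
    			return []string{"kube-dns"} // resolves once coredns reports Ready
    		}
    		return nil
    	}, 2*time.Minute)
    }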
	W1013 22:03:52.897348  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:55.398548  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:53.444083  487583 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.60749131s)
	I1013 22:03:53.444126  487583 kic.go:203] duration metric: took 4.607647301s to extract preloaded images to volume ...
	W1013 22:03:53.444253  487583 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:53.444294  487583 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:53.444355  487583 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:53.505540  487583 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-200102 --name auto-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-200102 --network auto-200102 --ip 192.168.85.2 --volume auto-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:03:53.797795  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Running}}
	I1013 22:03:53.818073  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:53.838163  487583 cli_runner.go:164] Run: docker exec auto-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:53.888227  487583 oci.go:144] the created container "auto-200102" has a running status.
	I1013 22:03:53.888268  487583 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa...
	I1013 22:03:54.127683  487583 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:54.165534  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:54.190445  487583 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:54.190471  487583 kic_runner.go:114] Args: [docker exec --privileged auto-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:54.250288  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:54.271713  487583 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:54.271839  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.294100  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.294394  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.294410  487583 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:54.440486  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-200102
	
	I1013 22:03:54.440521  487583 ubuntu.go:182] provisioning hostname "auto-200102"
	I1013 22:03:54.440599  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.460711  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.461103  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.461129  487583 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-200102 && echo "auto-200102" | sudo tee /etc/hostname
	I1013 22:03:54.614535  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-200102
	
	I1013 22:03:54.614622  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.634586  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.634913  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.634939  487583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:54.775376  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:54.775453  487583 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:54.775507  487583 ubuntu.go:190] setting up certificates
	I1013 22:03:54.775525  487583 provision.go:84] configureAuth start
	I1013 22:03:54.775607  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:54.794893  487583 provision.go:143] copyHostCerts
	I1013 22:03:54.794955  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:54.794966  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:54.795082  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:54.795182  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:54.795192  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:54.795220  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:54.795279  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:54.795286  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:54.795308  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:54.795376  487583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.auto-200102 san=[127.0.0.1 192.168.85.2 auto-200102 localhost minikube]
	I1013 22:03:55.188429  487583 provision.go:177] copyRemoteCerts
	I1013 22:03:55.188510  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:55.188566  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.208580  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:55.315410  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:55.338226  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:55.360346  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:03:55.380477  487583 provision.go:87] duration metric: took 604.930225ms to configureAuth
	I1013 22:03:55.380510  487583 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:55.380713  487583 config.go:182] Loaded profile config "auto-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:55.380859  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.403380  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:55.403687  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:55.403708  487583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:55.708397  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:55.708428  487583 machine.go:96] duration metric: took 1.436678744s to provisionDockerMachine
	I1013 22:03:55.708441  487583 client.go:171] duration metric: took 7.457718927s to LocalClient.Create
	I1013 22:03:55.708465  487583 start.go:167] duration metric: took 7.457781344s to libmachine.API.Create "auto-200102"
	I1013 22:03:55.708474  487583 start.go:293] postStartSetup for "auto-200102" (driver="docker")
	I1013 22:03:55.708486  487583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:55.708549  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:55.708593  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.731147  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:55.835461  487583 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:55.839937  487583 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:55.839973  487583 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:55.839987  487583 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:55.840062  487583 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:55.840155  487583 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:55.840296  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:55.849304  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:55.874132  487583 start.go:296] duration metric: took 165.640662ms for postStartSetup
	I1013 22:03:55.874609  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:55.895464  487583 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json ...
	I1013 22:03:55.895821  487583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:55.895876  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.918553  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.017585  487583 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:56.022832  487583 start.go:128] duration metric: took 7.775200902s to createHost
	I1013 22:03:56.022863  487583 start.go:83] releasing machines lock for "auto-200102", held for 7.775332897s
	I1013 22:03:56.022941  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:56.042596  487583 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:56.042662  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:56.042670  487583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:56.042775  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:56.063800  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.064255  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.159743  487583 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:56.225451  487583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:56.267876  487583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:56.273025  487583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:56.273101  487583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:56.301157  487583 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:56.301183  487583 start.go:495] detecting cgroup driver to use...
	I1013 22:03:56.301217  487583 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:56.301264  487583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:56.320515  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:56.335527  487583 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:56.335597  487583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:56.356535  487583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:56.379123  487583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:56.511060  487583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:56.630403  487583 docker.go:234] disabling docker service ...
	I1013 22:03:56.630478  487583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:56.654697  487583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:56.669666  487583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:56.772800  487583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:56.867300  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:56.883388  487583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:56.902552  487583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:56.902621  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.914797  487583 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:56.914859  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.924554  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.934706  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.944950  487583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:56.954161  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.963680  487583 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.978238  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.988376  487583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:56.997805  487583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:57.008209  487583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:57.102217  487583 ssh_runner.go:195] Run: sudo systemctl restart crio
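The run of sed commands above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pin the pause image, force the systemd cgroup manager, set conmon_cgroup to "pod", and allow unprivileged low ports via default_sysctls) before restarting crio. An equivalent Go sketch of the two headline substitutions, with regexes mirroring the sed patterns and error handling kept minimal:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// pause_image = "registry.k8s.io/pause:3.10.1"
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// cgroup_manager = "systemd"
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }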
	I1013 22:03:57.219578  487583 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:57.219638  487583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:57.224855  487583 start.go:563] Will wait 60s for crictl version
	I1013 22:03:57.224920  487583 ssh_runner.go:195] Run: which crictl
	I1013 22:03:57.228808  487583 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:57.258192  487583 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:57.258284  487583 ssh_runner.go:195] Run: crio --version
	I1013 22:03:57.294146  487583 ssh_runner.go:195] Run: crio --version
	I1013 22:03:57.330295  487583 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:57.331555  487583 cli_runner.go:164] Run: docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:57.352700  487583 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:57.357856  487583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:57.370430  487583 kubeadm.go:883] updating cluster {Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:57.370591  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:57.370677  487583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:57.412033  487583 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:57.412066  487583 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:57.412128  487583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:57.446310  487583 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:57.446335  487583 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:57.446346  487583 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:57.446458  487583 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:03:57.446545  487583 ssh_runner.go:195] Run: crio config
	I1013 22:03:57.502468  487583 cni.go:84] Creating CNI manager for ""
	I1013 22:03:57.502491  487583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:57.502510  487583 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:57.502535  487583 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-200102 NodeName:auto-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:57.502675  487583 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:03:57.502750  487583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:57.514371  487583 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:57.514451  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:57.523122  487583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 22:03:57.536941  487583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:57.554920  487583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1013 22:03:57.571949  487583 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:57.576445  487583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:57.588402  487583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:57.676518  487583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:57.702875  487583 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102 for IP: 192.168.85.2
	I1013 22:03:57.702904  487583 certs.go:195] generating shared ca certs ...
	I1013 22:03:57.702930  487583 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.703216  487583 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:57.703262  487583 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:57.703278  487583 certs.go:257] generating profile certs ...
	I1013 22:03:57.703341  487583 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key
	I1013 22:03:57.703367  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt with IP's: []
	I1013 22:03:57.851026  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt ...
	I1013 22:03:57.851064  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt: {Name:mk2579bf978c11798ebce23c8f9b2443dab8b152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.851280  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key ...
	I1013 22:03:57.851296  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key: {Name:mk7f8f06d5515585f3c94e35065c0da5eafac2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.851424  487583 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274
	I1013 22:03:57.851447  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:03:57.985116  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 ...
	I1013 22:03:57.985154  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274: {Name:mk74c23f716571620a1007598ae871740882eb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.985409  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274 ...
	I1013 22:03:57.985429  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274: {Name:mk194bd31c37146af9822ed9392ca6af9be4ed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.985662  487583 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt
	I1013 22:03:57.985848  487583 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key
	I1013 22:03:57.985965  487583 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key
	I1013 22:03:57.986011  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt with IP's: []
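The crypto.go lines generate per-profile certificates signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs logged at 22:03:57.851447. A minimal crypto/x509 sketch of that signing step; the template fields and a self-signed stand-in CA are assumptions (the real CA is loaded from .minikube/ca.key), while the SAN list is from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed stand-in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	// Profile cert with the apiserver IP SANs seen in the log.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	fmt.Println(len(der), err)
    }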
	I1013 22:03:55.904621  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.904664  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.904673  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.904682  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.904688  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.904694  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.904699  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.904704  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.904712  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.904741  477441 retry.go:31] will retry after 412.348385ms: missing components: kube-dns
	I1013 22:03:56.321704  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:56.321754  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:56.321764  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:56.321778  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:56.321785  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:56.321795  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:56.321801  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:56.321809  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:56.321818  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:56.321842  477441 retry.go:31] will retry after 533.406261ms: missing components: kube-dns
	I1013 22:03:56.860110  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:56.860143  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Running
	I1013 22:03:56.860151  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:56.860159  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:56.860164  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:56.860170  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:56.860175  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:56.860180  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:56.860185  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Running
	I1013 22:03:56.860197  477441 system_pods.go:126] duration metric: took 1.504300371s to wait for k8s-apps to be running ...
	I1013 22:03:56.860211  477441 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:03:56.860265  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:56.876876  477441 system_svc.go:56] duration metric: took 16.654755ms WaitForService to wait for kubelet
	I1013 22:03:56.876916  477441 kubeadm.go:586] duration metric: took 12.772822907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:56.876941  477441 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:03:56.879720  477441 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:03:56.879749  477441 node_conditions.go:123] node cpu capacity is 8
	I1013 22:03:56.879764  477441 node_conditions.go:105] duration metric: took 2.817207ms to run NodePressure ...
	I1013 22:03:56.879776  477441 start.go:241] waiting for startup goroutines ...
	I1013 22:03:56.879782  477441 start.go:246] waiting for cluster config update ...
	I1013 22:03:56.879793  477441 start.go:255] writing updated cluster config ...
	I1013 22:03:56.880082  477441 ssh_runner.go:195] Run: rm -f paused
	I1013 22:03:56.884439  477441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:56.888364  477441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.892803  477441 pod_ready.go:94] pod "coredns-66bc5c9577-5x8dn" is "Ready"
	I1013 22:03:56.892837  477441 pod_ready.go:86] duration metric: took 4.448764ms for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.895141  477441 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.899825  477441 pod_ready.go:94] pod "etcd-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:56.899856  477441 pod_ready.go:86] duration metric: took 4.683785ms for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.901925  477441 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.906108  477441 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:56.906138  477441 pod_ready.go:86] duration metric: took 4.191264ms for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.908355  477441 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.289538  477441 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:57.289578  477441 pod_ready.go:86] duration metric: took 381.195592ms for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.489112  477441 pod_ready.go:83] waiting for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.889231  477441 pod_ready.go:94] pod "kube-proxy-27pnt" is "Ready"
	I1013 22:03:57.889265  477441 pod_ready.go:86] duration metric: took 400.118576ms for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.090857  477441 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.489541  477441 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:58.489575  477441 pod_ready.go:86] duration metric: took 398.683022ms for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.489590  477441 pod_ready.go:40] duration metric: took 1.605113886s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:58.540564  477441 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:03:58.542840  477441 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-505851" cluster and "default" namespace by default
	I1013 22:03:58.039166  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt ...
	I1013 22:03:58.039192  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt: {Name:mkee870077425074c906eaa1754c70c39dd1609a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:58.039371  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key ...
	I1013 22:03:58.039389  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key: {Name:mk655d2a19977110d0e11b0a3d6a87cbec7dcec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:58.039571  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:58.039607  487583 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:58.039615  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:58.039634  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:58.039654  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:58.039687  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:58.039725  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:58.040290  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:58.059358  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:58.084643  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:58.109884  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:58.133290  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:03:58.153882  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:03:58.175490  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:58.194357  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:58.213174  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:58.234011  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:58.253249  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:58.273422  487583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:58.288505  487583 ssh_runner.go:195] Run: openssl version
	I1013 22:03:58.295475  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:58.305197  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.309361  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.309420  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.350087  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:03:58.359841  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:58.369253  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.373660  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.373713  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.410477  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:58.420091  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:58.429157  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.433163  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.433218  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.472582  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
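
The three openssl/ln sequences above follow OpenSSL's subject-hash convention: a directory such as /etc/ssl/certs is indexed by symlinks named <subject_hash>.0 that point at the PEM files, which is why each certificate is hashed first and then linked. A minimal sketch of one such step, using the minikube CA path from this log (variable names are illustrative):

	# Compute the OpenSSL subject hash of the CA and install the
	# <hash>.0 symlink that libssl uses to look certificates up.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash
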
	I1013 22:03:58.482625  487583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:58.486464  487583 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:58.486515  487583 kubeadm.go:400] StartCluster: {Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:58.486602  487583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:58.486644  487583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:58.518129  487583 cri.go:89] found id: ""
	I1013 22:03:58.518203  487583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:58.528420  487583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:58.537447  487583 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:58.537509  487583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:58.545975  487583 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:58.546041  487583 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:58.546097  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:58.555032  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:58.555095  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:58.566394  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:58.576170  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:58.576233  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:58.585668  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:58.594090  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:58.594152  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:58.603321  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:58.612260  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:58.612342  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
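
The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig kubeadm might have left behind is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so that `kubeadm init` starts clean. A compact equivalent, as a sketch (endpoint taken from the log):

	# Drop any leftover kubeconfig that does not point at this
	# cluster's control-plane endpoint.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
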
	I1013 22:03:58.620752  487583 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:58.668965  487583 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:58.669079  487583 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:58.694495  487583 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:58.694619  487583 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:58.694675  487583 kubeadm.go:318] OS: Linux
	I1013 22:03:58.694719  487583 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:58.694794  487583 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:58.694890  487583 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:58.694986  487583 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:58.695084  487583 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:58.695165  487583 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:58.695250  487583 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:58.695324  487583 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:58.763338  487583 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:58.763475  487583 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:58.763592  487583 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:58.771259  487583 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:03:55.287236  484490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000886624s
	I1013 22:03:55.290383  484490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:03:55.290549  484490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1013 22:03:55.290705  484490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:03:55.290828  484490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:03:56.796341  484490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.505897974s
	I1013 22:03:58.031333  484490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.740905764s
	I1013 22:03:59.792828  484490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502441565s
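
The control-plane-check phase above polls three local HTTPS endpoints until each reports healthy. Equivalent manual probes, using the URLs from the log (certificate verification skipped, since these components serve cluster-CA or self-signed certs; anonymous access to the health endpoints is allowed by default):

	curl -k https://192.168.94.2:8443/livez     # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez       # kube-scheduler
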
	I1013 22:03:59.807208  484490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:03:59.821040  484490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:03:59.832233  484490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:03:59.832557  484490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-843554 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:03:59.841625  484490 kubeadm.go:318] [bootstrap-token] Using token: qujhya.lp2l688dgho08i02
	I1013 22:03:59.842861  484490 out.go:252]   - Configuring RBAC rules ...
	I1013 22:03:59.843045  484490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:03:59.848430  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:03:59.854539  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:03:59.857466  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:03:59.860477  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:03:59.863374  484490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:04:00.199711  484490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:04:00.617233  484490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:04:01.200369  484490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:04:01.201630  484490 kubeadm.go:318] 
	I1013 22:04:01.201749  484490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:04:01.201760  484490 kubeadm.go:318] 
	I1013 22:04:01.201861  484490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:04:01.201877  484490 kubeadm.go:318] 
	I1013 22:04:01.201913  484490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:04:01.201976  484490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:04:01.202065  484490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:04:01.202074  484490 kubeadm.go:318] 
	I1013 22:04:01.202147  484490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:04:01.202158  484490 kubeadm.go:318] 
	I1013 22:04:01.202214  484490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:04:01.202224  484490 kubeadm.go:318] 
	I1013 22:04:01.202327  484490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:04:01.202403  484490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:04:01.202463  484490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:04:01.202469  484490 kubeadm.go:318] 
	I1013 22:04:01.202573  484490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:04:01.202685  484490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:04:01.202692  484490 kubeadm.go:318] 
	I1013 22:04:01.202812  484490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qujhya.lp2l688dgho08i02 \
	I1013 22:04:01.202954  484490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:04:01.202980  484490 kubeadm.go:318] 	--control-plane 
	I1013 22:04:01.202984  484490 kubeadm.go:318] 
	I1013 22:04:01.203115  484490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:04:01.203129  484490 kubeadm.go:318] 
	I1013 22:04:01.203248  484490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qujhya.lp2l688dgho08i02 \
	I1013 22:04:01.203347  484490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:04:01.207191  484490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:04:01.207356  484490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:04:01.207384  484490 cni.go:84] Creating CNI manager for ""
	I1013 22:04:01.207398  484490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:01.209351  484490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:03:57.398768  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:59.897919  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:58.773424  487583 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:58.773529  487583 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:58.773601  487583 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:58.929208  487583 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:59.149233  487583 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:59.436094  487583 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:59.839862  487583 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:04:00.016670  487583 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:04:00.016898  487583 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:00.136464  487583 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:04:00.136727  487583 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:00.475165  487583 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:04:01.108386  487583 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:04:01.372861  487583 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:04:01.373005  487583 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:04:01.483598  487583 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:04:01.771328  487583 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:04:02.020984  487583 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:04:02.485429  487583 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:04:02.805340  487583 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:04:02.805811  487583 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:04:02.811547  487583 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:04:02.813104  487583 out.go:252]   - Booting up control plane ...
	I1013 22:04:02.813188  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:04:02.813249  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:04:02.813887  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:04:02.840786  487583 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:04:02.840946  487583 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:04:02.849340  487583 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:04:02.849635  487583 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:04:02.849700  487583 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:04:02.956750  487583 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:04:02.956896  487583 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:04:01.210712  484490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:04:01.215507  484490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:04:01.215531  484490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:04:01.229960  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:04:01.461985  484490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:04:01.462089  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:01.462144  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-843554 minikube.k8s.io/updated_at=2025_10_13T22_04_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=newest-cni-843554 minikube.k8s.io/primary=true
	I1013 22:04:01.546592  484490 ops.go:34] apiserver oom_adj: -16
	I1013 22:04:01.546647  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:02.047002  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:02.546699  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:03.047144  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:03.546720  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:04.047188  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:04.546953  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:05.046752  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:05.547270  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:06.046904  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:06.132724  484490 kubeadm.go:1113] duration metric: took 4.670694614s to wait for elevateKubeSystemPrivileges
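
The repeated `kubectl get sa default` runs above (one every ~500ms) are a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has come up, so minikube retries until the get succeeds before granting kube-system privileges. A minimal equivalent loop (a sketch):

	# Wait until the default ServiceAccount appears in the default namespace.
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
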
	I1013 22:04:06.132764  484490 kubeadm.go:402] duration metric: took 16.361347762s to StartCluster
	I1013 22:04:06.132788  484490 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:06.132880  484490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:06.134776  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:06.135092  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:04:06.135107  484490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:04:06.135199  484490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:04:06.135293  484490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-843554"
	I1013 22:04:06.135310  484490 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:06.135321  484490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-843554"
	I1013 22:04:06.135361  484490 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:06.135314  484490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-843554"
	I1013 22:04:06.135419  484490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843554"
	I1013 22:04:06.135832  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.136032  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.137115  484490 out.go:179] * Verifying Kubernetes components...
	I1013 22:04:06.141173  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:04:06.161027  484490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-843554"
	I1013 22:04:06.161078  484490 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:06.161553  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.162361  484490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1013 22:04:02.397017  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:04:04.897852  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:04:06.164230  484490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:06.164253  484490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:04:06.164305  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:04:06.198126  484490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:06.198154  484490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:04:06.198219  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:04:06.207029  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:04:06.231679  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:04:06.254188  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:04:06.316606  484490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:04:06.362597  484490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:06.362782  484490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:06.481314  484490 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
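
The sed pipeline at 22:04:06.254 rewrites the CoreDNS Corefile before replacing the ConfigMap: it inserts a hosts block ahead of the forward plugin and a log directive ahead of errors. The resulting Corefile looks roughly like this (reconstructed from the sed expressions, not read back from the cluster):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.94.1 host.minikube.internal
	       fallthrough
	    }
	    ...
	    forward . /etc/resolv.conf
	}
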
	I1013 22:04:06.483112  484490 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:06.483175  484490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:06.754622  484490 api_server.go:72] duration metric: took 619.477155ms to wait for apiserver process to appear ...
	I1013 22:04:06.754648  484490 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:06.754678  484490 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:06.756049  484490 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	
	
	==> CRI-O <==
	Oct 13 22:03:55 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:55.594116392Z" level=info msg="Starting container: 79bb16be21eb9eff7cdbffa04d4edacb15263a66902941c2f6687d1b459a92fb" id=1a7470eb-54e3-4d4b-9076-0023a8b0672a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:55 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:55.597264255Z" level=info msg="Started container" PID=1872 containerID=79bb16be21eb9eff7cdbffa04d4edacb15263a66902941c2f6687d1b459a92fb description=kube-system/coredns-66bc5c9577-5x8dn/coredns id=1a7470eb-54e3-4d4b-9076-0023a8b0672a name=/runtime.v1.RuntimeService/StartContainer sandboxID=e44b1e1b1cdeee90d8ffd756c8e17ca465ef4e349dc4f48f8d4a9a7a83709827
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.034184996Z" level=info msg="Running pod sandbox: default/busybox/POD" id=76b37117-5f8d-41fa-abbe-5a5061a6c386 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.034271241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.041587793Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:75c4150fe5ba91c7a795f2aa11019916d49725cc0ecef7cc3898d7974b600d1c UID:1f8454a6-017d-4521-b0a5-2f14f3d912b2 NetNS:/var/run/netns/ad57aeec-b043-4481-a7df-a7087dd3a22c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c468}] Aliases:map[]}"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.04162983Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.054609241Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:75c4150fe5ba91c7a795f2aa11019916d49725cc0ecef7cc3898d7974b600d1c UID:1f8454a6-017d-4521-b0a5-2f14f3d912b2 NetNS:/var/run/netns/ad57aeec-b043-4481-a7df-a7087dd3a22c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c468}] Aliases:map[]}"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.05477235Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.055742899Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.056873271Z" level=info msg="Ran pod sandbox 75c4150fe5ba91c7a795f2aa11019916d49725cc0ecef7cc3898d7974b600d1c with infra container: default/busybox/POD" id=76b37117-5f8d-41fa-abbe-5a5061a6c386 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.058173882Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=38dd5784-bf9c-4b8c-b8bb-f198ca3a301c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.058320525Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=38dd5784-bf9c-4b8c-b8bb-f198ca3a301c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.05836629Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=38dd5784-bf9c-4b8c-b8bb-f198ca3a301c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.059201785Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0bd327c-ae14-499d-8c17-883a139a9663 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.063455653Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.852486585Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f0bd327c-ae14-499d-8c17-883a139a9663 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.853311199Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d1096a9d-7a91-474d-8b50-bc61a9c595cd name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.85508511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4ae34817-5356-4aeb-8511-fbc2bf7f6e6f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.858590142Z" level=info msg="Creating container: default/busybox/busybox" id=8e206016-be9e-4767-9a6c-53b969359776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.859468828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.864097755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.864740404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.895296733Z" level=info msg="Created container a7d15f5de3a873d7df91dab4f6c2395e3ce2bf9aa5d94546c896bee50c2d1fd9: default/busybox/busybox" id=8e206016-be9e-4767-9a6c-53b969359776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.896142718Z" level=info msg="Starting container: a7d15f5de3a873d7df91dab4f6c2395e3ce2bf9aa5d94546c896bee50c2d1fd9" id=be1029c9-a0f3-4566-a830-477edc7236cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:03:59 default-k8s-diff-port-505851 crio[780]: time="2025-10-13T22:03:59.898519759Z" level=info msg="Started container" PID=1948 containerID=a7d15f5de3a873d7df91dab4f6c2395e3ce2bf9aa5d94546c896bee50c2d1fd9 description=default/busybox/busybox id=be1029c9-a0f3-4566-a830-477edc7236cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=75c4150fe5ba91c7a795f2aa11019916d49725cc0ecef7cc3898d7974b600d1c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	a7d15f5de3a87       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   75c4150fe5ba9       busybox                                                default
	79bb16be21eb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   e44b1e1b1cdee       coredns-66bc5c9577-5x8dn                               kube-system
	c6903ccb8921a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   71626217f3289       storage-provisioner                                    kube-system
	c6397669fc053       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   c700dc0cd91e3       kindnet-m5whc                                          kube-system
	0ad763ab35055       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   8da98947333a3       kube-proxy-27pnt                                       kube-system
	93591942ea80d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   25931a16691d2       kube-controller-manager-default-k8s-diff-port-505851   kube-system
	589a8803d1551       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   0f11b258e5eb7       etcd-default-k8s-diff-port-505851                      kube-system
	c5465c03f86cb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   85e63a506e88a       kube-apiserver-default-k8s-diff-port-505851            kube-system
	106c626f18a0a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   564b10f459434       kube-scheduler-default-k8s-diff-port-505851            kube-system
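
The table above is the CRI runtime's own container listing; with CRI-O on the node it can be reproduced with crictl (a sketch of the common invocation, not necessarily the exact command the report ran):

	# List all containers, running and exited, known to the CRI runtime.
	sudo crictl ps -a
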
	
	
	==> coredns [79bb16be21eb9eff7cdbffa04d4edacb15263a66902941c2f6687d1b459a92fb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33875 - 5162 "HINFO IN 2023120024352108094.8950436020824501996. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067503982s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-505851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-505851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-505851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-505851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:03:58 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:03:58 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:03:58 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:03:58 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-505851
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                ff284ab0-6ab9-4288-9f40-64d181496243
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-5x8dn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-505851                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-m5whc                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-505851             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-505851    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-27pnt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-505851             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-505851 event: Registered Node default-k8s-diff-port-505851 in Controller
	  Normal  NodeReady                12s                kubelet          Node default-k8s-diff-port-505851 status is now: NodeReady
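
The node summary above is standard kubectl output; assuming a kubeconfig pointed at this cluster, the same view comes from:

	kubectl describe node default-k8s-diff-port-505851
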
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [589a8803d1551bc49243cd5633d97fc06543cf9623e286f8d7f869e394b44a63] <==
	{"level":"info","ts":"2025-10-13T22:03:44.083038Z","caller":"traceutil/trace.go:172","msg":"trace[132329628] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"197.580897ms","start":"2025-10-13T22:03:43.885448Z","end":"2025-10-13T22:03:44.083029Z","steps":["trace[132329628] 'process raft request'  (duration: 197.40083ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.083052Z","caller":"traceutil/trace.go:172","msg":"trace[10507392] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"195.992582ms","start":"2025-10-13T22:03:43.887045Z","end":"2025-10-13T22:03:44.083037Z","steps":["trace[10507392] 'process raft request'  (duration: 195.866888ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.083081Z","caller":"traceutil/trace.go:172","msg":"trace[1100843818] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"192.004699ms","start":"2025-10-13T22:03:43.891068Z","end":"2025-10-13T22:03:44.083073Z","steps":["trace[1100843818] 'process raft request'  (duration: 191.871877ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.083148Z","caller":"traceutil/trace.go:172","msg":"trace[1853948850] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"188.587792ms","start":"2025-10-13T22:03:43.894550Z","end":"2025-10-13T22:03:44.083138Z","steps":["trace[1853948850] 'process raft request'  (duration: 188.428921ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.083205Z","caller":"traceutil/trace.go:172","msg":"trace[82325412] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"196.997673ms","start":"2025-10-13T22:03:43.886201Z","end":"2025-10-13T22:03:44.083199Z","steps":["trace[82325412] 'process raft request'  (duration: 196.684623ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.082748Z","caller":"traceutil/trace.go:172","msg":"trace[13741974] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"198.263271ms","start":"2025-10-13T22:03:43.884459Z","end":"2025-10-13T22:03:44.082722Z","steps":["trace[13741974] 'process raft request'  (duration: 125.208688ms)","trace[13741974] 'compare'  (duration: 72.904271ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:03:44.083490Z","caller":"traceutil/trace.go:172","msg":"trace[571020960] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"198.99438ms","start":"2025-10-13T22:03:43.884485Z","end":"2025-10-13T22:03:44.083480Z","steps":["trace[571020960] 'process raft request'  (duration: 198.24678ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.083509Z","caller":"traceutil/trace.go:172","msg":"trace[515550028] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"198.38123ms","start":"2025-10-13T22:03:43.885115Z","end":"2025-10-13T22:03:44.083497Z","steps":["trace[515550028] 'process raft request'  (duration: 197.692716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.298217Z","caller":"traceutil/trace.go:172","msg":"trace[1465340726] linearizableReadLoop","detail":"{readStateIndex:375; appliedIndex:375; }","duration":"128.229812ms","start":"2025-10-13T22:03:44.169951Z","end":"2025-10-13T22:03:44.298180Z","steps":["trace[1465340726] 'read index received'  (duration: 128.218813ms)","trace[1465340726] 'applied index is now lower than readState.Index'  (duration: 9.01µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:03:44.336578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.609438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2025-10-13T22:03:44.336680Z","caller":"traceutil/trace.go:172","msg":"trace[1900965014] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:366; }","duration":"166.717471ms","start":"2025-10-13T22:03:44.169933Z","end":"2025-10-13T22:03:44.336650Z","steps":["trace[1900965014] 'agreement among raft nodes before linearized reading'  (duration: 128.375032ms)","trace[1900965014] 'range keys from in-memory index tree'  (duration: 38.178384ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:03:44.336901Z","caller":"traceutil/trace.go:172","msg":"trace[2016884] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"159.343968ms","start":"2025-10-13T22:03:44.177542Z","end":"2025-10-13T22:03:44.336886Z","steps":["trace[2016884] 'process raft request'  (duration: 159.305036ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.337107Z","caller":"traceutil/trace.go:172","msg":"trace[1806686925] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"168.975411ms","start":"2025-10-13T22:03:44.168097Z","end":"2025-10-13T22:03:44.337072Z","steps":["trace[1806686925] 'process raft request'  (duration: 168.589219ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.337144Z","caller":"traceutil/trace.go:172","msg":"trace[670942265] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"172.297633ms","start":"2025-10-13T22:03:44.164839Z","end":"2025-10-13T22:03:44.337137Z","steps":["trace[670942265] 'process raft request'  (duration: 133.337901ms)","trace[670942265] 'compare'  (duration: 38.338365ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:03:44.337175Z","caller":"traceutil/trace.go:172","msg":"trace[1216902544] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"166.495216ms","start":"2025-10-13T22:03:44.170655Z","end":"2025-10-13T22:03:44.337150Z","steps":["trace[1216902544] 'process raft request'  (duration: 166.083858ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:03:44.337378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.908119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4681"}
	{"level":"info","ts":"2025-10-13T22:03:44.338228Z","caller":"traceutil/trace.go:172","msg":"trace[724831463] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:371; }","duration":"162.769694ms","start":"2025-10-13T22:03:44.175449Z","end":"2025-10-13T22:03:44.338219Z","steps":["trace[724831463] 'agreement among raft nodes before linearized reading'  (duration: 161.797585ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:03:44.337451Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.646542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-13T22:03:44.338402Z","caller":"traceutil/trace.go:172","msg":"trace[1764955342] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:371; }","duration":"140.606922ms","start":"2025-10-13T22:03:44.197787Z","end":"2025-10-13T22:03:44.338394Z","steps":["trace[1764955342] 'agreement among raft nodes before linearized reading'  (duration: 139.582446ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.337502Z","caller":"traceutil/trace.go:172","msg":"trace[1658504067] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"161.792147ms","start":"2025-10-13T22:03:44.175699Z","end":"2025-10-13T22:03:44.337492Z","steps":["trace[1658504067] 'process raft request'  (duration: 161.086846ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.446790Z","caller":"traceutil/trace.go:172","msg":"trace[901794332] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"100.549675ms","start":"2025-10-13T22:03:44.346216Z","end":"2025-10-13T22:03:44.446766Z","steps":["trace[901794332] 'process raft request'  (duration: 100.364593ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:03:44.447215Z","caller":"traceutil/trace.go:172","msg":"trace[1325047650] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"103.899193ms","start":"2025-10-13T22:03:44.343294Z","end":"2025-10-13T22:03:44.447193Z","steps":["trace[1325047650] 'process raft request'  (duration: 99.595268ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:03:52.362768Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.618788ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355941725538298 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-505851\" mod_revision:424 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-505851\" value_size:4754 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-505851\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T22:03:52.362925Z","caller":"traceutil/trace.go:172","msg":"trace[1566625911] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"266.509563ms","start":"2025-10-13T22:03:52.096384Z","end":"2025-10-13T22:03:52.362894Z","steps":["trace[1566625911] 'process raft request'  (duration: 136.273407ms)","trace[1566625911] 'compare'  (duration: 129.505336ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:03:53.125822Z","caller":"traceutil/trace.go:172","msg":"trace[518650500] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"246.943132ms","start":"2025-10-13T22:03:52.878861Z","end":"2025-10-13T22:03:53.125804Z","steps":["trace[518650500] 'process raft request'  (duration: 246.762955ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:04:07 up  1:46,  0 user,  load average: 4.45, 3.64, 5.86
	Linux default-k8s-diff-port-505851 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c6397669fc053d7a14f730971d8773d34b34fbfdec7d19e1a1b3ce7c5fd786da] <==
	I1013 22:03:44.814094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:03:44.814508       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:03:44.814673       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:03:44.814696       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:03:44.814765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:03:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:03:45.020627       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:03:45.020654       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:03:45.020666       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:03:45.020919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:03:45.323037       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:03:45.323070       1 metrics.go:72] Registering metrics
	I1013 22:03:45.323148       1 controller.go:711] "Syncing nftables rules"
	I1013 22:03:55.021173       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:03:55.021259       1 main.go:301] handling current node
	I1013 22:04:05.023816       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:04:05.023865       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c5465c03f86cb168410acae2a9d236f3e17786f8d9ce8cead65dd17fb9dc900e] <==
	E1013 22:03:36.005138       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1013 22:03:36.042101       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:03:36.060553       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:36.061197       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:03:36.068759       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:36.070463       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:03:36.208050       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:03:36.845612       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:03:36.849618       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:03:36.849637       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:03:37.453780       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:03:37.502783       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:03:37.549862       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:03:37.556533       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 22:03:37.557730       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:03:37.561908       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:03:37.886968       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:03:38.464238       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:03:38.481183       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:03:38.494695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:03:43.489902       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:03:43.721860       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:43.726186       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:03:43.753844       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1013 22:04:05.833485       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:57462: use of closed network connection
	
	
	==> kube-controller-manager [93591942ea80d9abf42e52692409e023a0d0dac4c37b790e32c94967d305b900] <==
	I1013 22:03:42.883846       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:03:42.884978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:03:42.885020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:03:42.885030       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:03:42.886042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:03:42.886169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:03:42.887246       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:03:42.887266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:03:42.887292       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:03:42.887375       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:03:42.887407       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:03:42.887376       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:03:42.887591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:03:42.888041       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:03:42.889405       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:03:42.891720       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:03:42.892983       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:03:42.893021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:03:42.893119       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:03:42.895564       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:03:42.899030       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:03:42.900057       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:03:42.907291       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:03:42.914571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:03:57.852678       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0ad763ab35055749e8d8ed70decd44ba6cde3ce374534acb842e488095ea755a] <==
	I1013 22:03:44.663061       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:03:44.729506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:03:44.830560       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:03:44.830607       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:03:44.830754       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:03:44.863536       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:03:44.863613       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:03:44.874781       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:03:44.875407       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:03:44.875556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:03:44.877858       1 config.go:200] "Starting service config controller"
	I1013 22:03:44.877922       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:03:44.877984       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:03:44.878029       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:03:44.878094       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:03:44.878129       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:03:44.878983       1 config.go:309] "Starting node config controller"
	I1013 22:03:44.880517       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:03:44.880560       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:03:44.979084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:03:44.979121       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:03:44.979127       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [106c626f18a0ac375d2f363e270500aafea9a6e3ac2169ff4ee2ecf639f3c0a0] <==
	E1013 22:03:35.905402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:03:35.905412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:03:35.905418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:03:35.905427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:03:35.905514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:03:35.905556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:03:35.905533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:03:35.905588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:03:35.905728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:03:36.717281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:03:36.748543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:03:36.794230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:03:36.805828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:03:36.885290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:03:36.886120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:03:36.923863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:03:36.958788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:03:36.962259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:03:37.055840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:03:37.074186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:03:37.123884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:03:37.202282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:03:37.244661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:03:37.252874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1013 22:03:39.600960       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:03:39 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:39.390816    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-505851" podStartSLOduration=2.390789189 podStartE2EDuration="2.390789189s" podCreationTimestamp="2025-10-13 22:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:39.378150298 +0000 UTC m=+1.176222655" watchObservedRunningTime="2025-10-13 22:03:39.390789189 +0000 UTC m=+1.188861547"
	Oct 13 22:03:39 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:39.404526    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-505851" podStartSLOduration=1.404506546 podStartE2EDuration="1.404506546s" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:39.391651661 +0000 UTC m=+1.189724018" watchObservedRunningTime="2025-10-13 22:03:39.404506546 +0000 UTC m=+1.202578883"
	Oct 13 22:03:39 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:39.404705    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-505851" podStartSLOduration=1.4046944319999999 podStartE2EDuration="1.404694432s" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:39.404490008 +0000 UTC m=+1.202562365" watchObservedRunningTime="2025-10-13 22:03:39.404694432 +0000 UTC m=+1.202766789"
	Oct 13 22:03:39 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:39.428021    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-505851" podStartSLOduration=1.427985064 podStartE2EDuration="1.427985064s" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:39.4172594 +0000 UTC m=+1.215331752" watchObservedRunningTime="2025-10-13 22:03:39.427985064 +0000 UTC m=+1.226057421"
	Oct 13 22:03:42 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:42.881479    1346 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 22:03:42 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:42.882279    1346 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219809    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cb84f83-962c-4830-bdad-0084bc59a7c4-kube-proxy\") pod \"kube-proxy-27pnt\" (UID: \"3cb84f83-962c-4830-bdad-0084bc59a7c4\") " pod="kube-system/kube-proxy-27pnt"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219862    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb84f83-962c-4830-bdad-0084bc59a7c4-xtables-lock\") pod \"kube-proxy-27pnt\" (UID: \"3cb84f83-962c-4830-bdad-0084bc59a7c4\") " pod="kube-system/kube-proxy-27pnt"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219886    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb84f83-962c-4830-bdad-0084bc59a7c4-lib-modules\") pod \"kube-proxy-27pnt\" (UID: \"3cb84f83-962c-4830-bdad-0084bc59a7c4\") " pod="kube-system/kube-proxy-27pnt"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219906    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f794ce45-bb06-44ce-beae-bffe3ff9d2c0-xtables-lock\") pod \"kindnet-m5whc\" (UID: \"f794ce45-bb06-44ce-beae-bffe3ff9d2c0\") " pod="kube-system/kindnet-m5whc"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219930    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46cw\" (UniqueName: \"kubernetes.io/projected/3cb84f83-962c-4830-bdad-0084bc59a7c4-kube-api-access-z46cw\") pod \"kube-proxy-27pnt\" (UID: \"3cb84f83-962c-4830-bdad-0084bc59a7c4\") " pod="kube-system/kube-proxy-27pnt"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219956    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m8t\" (UniqueName: \"kubernetes.io/projected/f794ce45-bb06-44ce-beae-bffe3ff9d2c0-kube-api-access-q8m8t\") pod \"kindnet-m5whc\" (UID: \"f794ce45-bb06-44ce-beae-bffe3ff9d2c0\") " pod="kube-system/kindnet-m5whc"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.219978    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f794ce45-bb06-44ce-beae-bffe3ff9d2c0-lib-modules\") pod \"kindnet-m5whc\" (UID: \"f794ce45-bb06-44ce-beae-bffe3ff9d2c0\") " pod="kube-system/kindnet-m5whc"
	Oct 13 22:03:44 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:44.220016    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f794ce45-bb06-44ce-beae-bffe3ff9d2c0-cni-cfg\") pod \"kindnet-m5whc\" (UID: \"f794ce45-bb06-44ce-beae-bffe3ff9d2c0\") " pod="kube-system/kindnet-m5whc"
	Oct 13 22:03:45 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:45.407785    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27pnt" podStartSLOduration=2.407757938 podStartE2EDuration="2.407757938s" podCreationTimestamp="2025-10-13 22:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:45.391879609 +0000 UTC m=+7.189951966" watchObservedRunningTime="2025-10-13 22:03:45.407757938 +0000 UTC m=+7.205830295"
	Oct 13 22:03:45 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:45.425238    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m5whc" podStartSLOduration=2.42521416 podStartE2EDuration="2.42521416s" podCreationTimestamp="2025-10-13 22:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:45.4088915 +0000 UTC m=+7.206963858" watchObservedRunningTime="2025-10-13 22:03:45.42521416 +0000 UTC m=+7.223286517"
	Oct 13 22:03:55 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:55.185763    1346 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:03:55 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:55.297667    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjm9l\" (UniqueName: \"kubernetes.io/projected/2b78411d-d81f-4b88-9a8d-921f7c26ec16-kube-api-access-zjm9l\") pod \"coredns-66bc5c9577-5x8dn\" (UID: \"2b78411d-d81f-4b88-9a8d-921f7c26ec16\") " pod="kube-system/coredns-66bc5c9577-5x8dn"
	Oct 13 22:03:55 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:55.297732    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgzw\" (UniqueName: \"kubernetes.io/projected/2b8d56b5-894f-44d4-8b07-d3507c981fc0-kube-api-access-nmgzw\") pod \"storage-provisioner\" (UID: \"2b8d56b5-894f-44d4-8b07-d3507c981fc0\") " pod="kube-system/storage-provisioner"
	Oct 13 22:03:55 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:55.297765    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b78411d-d81f-4b88-9a8d-921f7c26ec16-config-volume\") pod \"coredns-66bc5c9577-5x8dn\" (UID: \"2b78411d-d81f-4b88-9a8d-921f7c26ec16\") " pod="kube-system/coredns-66bc5c9577-5x8dn"
	Oct 13 22:03:55 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:55.297827    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2b8d56b5-894f-44d4-8b07-d3507c981fc0-tmp\") pod \"storage-provisioner\" (UID: \"2b8d56b5-894f-44d4-8b07-d3507c981fc0\") " pod="kube-system/storage-provisioner"
	Oct 13 22:03:56 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:56.406030    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.405984703 podStartE2EDuration="12.405984703s" podCreationTimestamp="2025-10-13 22:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:56.405637039 +0000 UTC m=+18.203709396" watchObservedRunningTime="2025-10-13 22:03:56.405984703 +0000 UTC m=+18.204057052"
	Oct 13 22:03:56 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:56.427760    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5x8dn" podStartSLOduration=12.4277301 podStartE2EDuration="12.4277301s" podCreationTimestamp="2025-10-13 22:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:56.424293001 +0000 UTC m=+18.222365359" watchObservedRunningTime="2025-10-13 22:03:56.4277301 +0000 UTC m=+18.225802456"
	Oct 13 22:03:58 default-k8s-diff-port-505851 kubelet[1346]: I1013 22:03:58.821997    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wqgr\" (UniqueName: \"kubernetes.io/projected/1f8454a6-017d-4521-b0a5-2f14f3d912b2-kube-api-access-5wqgr\") pod \"busybox\" (UID: \"1f8454a6-017d-4521-b0a5-2f14f3d912b2\") " pod="default/busybox"
	Oct 13 22:04:05 default-k8s-diff-port-505851 kubelet[1346]: E1013 22:04:05.833466    1346 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47532->127.0.0.1:39715: write tcp 127.0.0.1:47532->127.0.0.1:39715: write: broken pipe
	
	
	==> storage-provisioner [c6903ccb8921a5a588df3363eea0e7c6f88ec19c3390aed1c97e4f825f93d00e] <==
	I1013 22:03:55.605447       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:03:55.618138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:03:55.618263       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:03:55.622725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:55.628692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:55.628857       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:03:55.629012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_c25e5f1a-83f8-4d24-9422-4d2a1feda8af!
	I1013 22:03:55.629769       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbdf3e78-bf34-43b3-8edf-a59e96e32243", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-505851_c25e5f1a-83f8-4d24-9422-4d2a1feda8af became leader
	W1013 22:03:55.631628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:55.636598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:03:55.729243       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_c25e5f1a-83f8-4d24-9422-4d2a1feda8af!
	W1013 22:03:57.639978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:57.646060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:59.649343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:03:59.653438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:01.657037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:01.661169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:03.664869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:03.668743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:05.673535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:05.679890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:07.683890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:07.689409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
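The etcd section of the log dump above is dominated by "apply request took too long" warnings: etcd traces any request that exceeds its 100ms expected-duration, and with the host load average at 4.45 (see the kernel section) the applies here stretch to 129-266ms. A minimal sketch for triaging these, assuming only the JSON line format shown above (the program itself is hypothetical, not part of the harness); it reads a saved log on stdin and prints the slow applies:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// etcdLine models just the fields we need from etcd's structured log output.
	type etcdLine struct {
		Ts   string `json:"ts"`
		Msg  string `json:"msg"`
		Took string `json:"took"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var l etcdLine
			if json.Unmarshal(sc.Bytes(), &l) != nil {
				continue // skip non-JSON lines (section headers, kubelet output, ...)
			}
			if l.Msg == "apply request took too long" {
				fmt.Printf("%s slow apply, took=%s\n", l.Ts, l.Took)
			}
		}
	}

Fed the lines above, this would print the 166ms, 161ms, 139ms and 129ms applies; applies that repeatedly sit in the hundreds of milliseconds on a CI host usually point at disk or CPU contention rather than an etcd bug.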
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.68s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (284.669624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
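This exit-11 matches the MK_ADDON_ENABLE_PAUSED failures seen across the StartStop tests in this report: before enabling an addon, minikube checks whether the cluster is paused, and on crio that check shells out to `sudo runc list -f json`, which fails here because /run/runc does not exist inside the node container. A minimal sketch of that probe, assuming only the command quoted in the stderr above (the Go wrapper is illustrative, not minikube's actual cruntime code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// The same command the error message quotes; on success `runc list -f json`
		// prints a JSON array of container states.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this host runc exits 1 ("open /run/runc: no such file or
			// directory") before emitting any JSON, which is what bubbles up
			// as "check paused: list paused" in the minikube error.
			fmt.Println("runc list failed:", err)
			return
		}
		var containers []map[string]any
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("unexpected runc output:", err)
			return
		}
		for _, c := range containers {
			fmt.Printf("id=%v status=%v\n", c["id"], c["status"])
		}
	}

Note the docker inspect output below shows /run mounted as a tmpfs in the node container, which is consistent with the runc state directory being absent.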
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-843554
helpers_test.go:243: (dbg) docker inspect newest-cni-843554:

-- stdout --
	[
	    {
	        "Id": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	        "Created": "2025-10-13T22:03:44.63390679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485977,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:03:44.692300985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hosts",
	        "LogPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c-json.log",
	        "Name": "/newest-cni-843554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	                "LowerDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843554",
	                "Source": "/var/lib/docker/volumes/newest-cni-843554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843554",
	                "name.minikube.sigs.k8s.io": "newest-cni-843554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ae9850dd9ae1c41c29db37c1d1f02f6b1d1f88b7e22efdc1e6742272eabefb4",
	            "SandboxKey": "/var/run/docker/netns/4ae9850dd9ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-843554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:38:ab:06:83:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a00d8bcb8b486fb836fa8e6ea8fe1361ab235dd6af3b3af1489d461e67a488",
	                    "EndpointID": "c212cfc4cfb08ef6d32fda45c2e48b829a75f8aeade9f1f5f2736de7d160d653",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843554",
	                        "d26d618d283e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
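The inspect output above is what the harness mines for connectivity details: the API server is published as 8443/tcp on 127.0.0.1:33086. A minimal sketch, not part of helpers_test.go, that reads the same mapping back with a Go format template passed to `docker inspect -f`:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into .NetworkSettings.Ports exactly as it appears in the
		// inspect JSON above; "8443/tcp" holds a list of host bindings.
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "newest-cni-843554").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33086 above
	}

Run against the container above while it is still up, this should print 33086.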
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25: (1.154482814s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable metrics-server -p no-preload-080337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │                     │
	│ stop    │ -p no-preload-080337 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ addons  │ enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ old-k8s-version-534822 image list --format=json                                                                                                                                                                                               │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p old-k8s-version-534822 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ start   │ -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-expiration-894101                                                                                                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:03:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:03:48.029761  487583 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:03:48.030023  487583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:48.030035  487583 out.go:374] Setting ErrFile to fd 2...
	I1013 22:03:48.030041  487583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:03:48.030296  487583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:03:48.030780  487583 out.go:368] Setting JSON to false
	I1013 22:03:48.031936  487583 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6376,"bootTime":1760386652,"procs":342,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:03:48.032065  487583 start.go:141] virtualization: kvm guest
	I1013 22:03:48.034430  487583 out.go:179] * [auto-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:03:48.036358  487583 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:03:48.036414  487583 notify.go:220] Checking for updates...
	I1013 22:03:48.038906  487583 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:03:48.040504  487583 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:03:48.041872  487583 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:03:48.043243  487583 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:03:48.044845  487583 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:03:48.046696  487583 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.046819  487583 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.046968  487583 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:48.047110  487583 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:03:48.073525  487583 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:03:48.073625  487583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:48.138195  487583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:03:48.12659078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:48.138327  487583 docker.go:318] overlay module found
	I1013 22:03:48.140592  487583 out.go:179] * Using the docker driver based on user configuration
	I1013 22:03:48.142124  487583 start.go:305] selected driver: docker
	I1013 22:03:48.142142  487583 start.go:925] validating driver "docker" against <nil>
	I1013 22:03:48.142153  487583 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:03:48.142712  487583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:03:48.216084  487583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:03:48.198359217 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:03:48.216338  487583 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:03:48.216566  487583 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:48.218603  487583 out.go:179] * Using Docker driver with root privileges
	I1013 22:03:48.220161  487583 cni.go:84] Creating CNI manager for ""
	I1013 22:03:48.220255  487583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:48.220270  487583 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:03:48.220345  487583 start.go:349] cluster config:
	{Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:48.221846  487583 out.go:179] * Starting "auto-200102" primary control-plane node in "auto-200102" cluster
	I1013 22:03:48.223068  487583 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:03:48.224361  487583 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:03:48.225605  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.225650  487583 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:03:48.225657  487583 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:03:48.225688  487583 cache.go:58] Caching tarball of preloaded images
	I1013 22:03:48.225840  487583 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:03:48.225851  487583 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:03:48.225978  487583 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json ...
	I1013 22:03:48.226022  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json: {Name:mkf8a6685b530b08c33830ead99deec2c559bb78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:48.247357  487583 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:03:48.247381  487583 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:03:48.247397  487583 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:03:48.247422  487583 start.go:360] acquireMachinesLock for auto-200102: {Name:mkec2895047b3318600813a981c122de09ee3451 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:03:48.247518  487583 start.go:364] duration metric: took 80.213µs to acquireMachinesLock for "auto-200102"
	I1013 22:03:48.247542  487583 start.go:93] Provisioning new machine with config: &{Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:03:48.247615  487583 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:03:44.989377  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Running}}
	I1013 22:03:45.009605  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.032666  484490 cli_runner.go:164] Run: docker exec newest-cni-843554 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:45.086247  484490 oci.go:144] the created container "newest-cni-843554" has a running status.
	I1013 22:03:45.086287  484490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa...
	I1013 22:03:45.645319  484490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:45.674508  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.693914  484490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:45.693946  484490 kic_runner.go:114] Args: [docker exec --privileged newest-cni-843554 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:45.746957  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:03:45.769075  484490 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:45.769199  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:45.790048  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:45.790402  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:45.790425  484490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:45.935251  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-843554
	
	I1013 22:03:45.935281  484490 ubuntu.go:182] provisioning hostname "newest-cni-843554"
	I1013 22:03:45.935351  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:45.965899  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:45.966222  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:45.966246  484490 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-843554 && echo "newest-cni-843554" | sudo tee /etc/hostname
	I1013 22:03:46.121610  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-843554
	
	I1013 22:03:46.121696  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.140604  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:46.140967  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:46.141012  484490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-843554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-843554/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-843554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:46.279212  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:46.279243  484490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:46.279270  484490 ubuntu.go:190] setting up certificates
	I1013 22:03:46.279283  484490 provision.go:84] configureAuth start
	I1013 22:03:46.279344  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:46.297002  484490 provision.go:143] copyHostCerts
	I1013 22:03:46.297075  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:46.297089  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:46.297160  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:46.297282  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:46.297295  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:46.297326  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:46.297424  484490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:46.297433  484490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:46.297458  484490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:46.297513  484490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.newest-cni-843554 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-843554]
	I1013 22:03:46.464762  484490 provision.go:177] copyRemoteCerts
	I1013 22:03:46.464825  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:46.464863  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.483946  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:46.584978  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:03:46.605571  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:46.624155  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:46.642517  484490 provision.go:87] duration metric: took 363.219497ms to configureAuth
	I1013 22:03:46.642544  484490 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:46.642726  484490 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:46.642860  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.660981  484490 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:46.661280  484490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1013 22:03:46.661309  484490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:46.923912  484490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:46.923945  484490 machine.go:96] duration metric: took 1.154833734s to provisionDockerMachine
	I1013 22:03:46.923960  484490 client.go:171] duration metric: took 6.852553107s to LocalClient.Create
	I1013 22:03:46.924102  484490 start.go:167] duration metric: took 6.852623673s to libmachine.API.Create "newest-cni-843554"
	I1013 22:03:46.924125  484490 start.go:293] postStartSetup for "newest-cni-843554" (driver="docker")
	I1013 22:03:46.924140  484490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:46.924216  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:46.924275  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:46.942648  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.049535  484490 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:47.053688  484490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:47.053722  484490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:47.053737  484490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:47.053800  484490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:47.053900  484490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:47.054053  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:47.063687  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:47.086898  484490 start.go:296] duration metric: took 162.754626ms for postStartSetup
	I1013 22:03:47.087348  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:47.105824  484490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/config.json ...
	I1013 22:03:47.106168  484490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:47.106225  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.125215  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.222639  484490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:47.227465  484490 start.go:128] duration metric: took 7.159265299s to createHost
	I1013 22:03:47.227489  484490 start.go:83] releasing machines lock for "newest-cni-843554", held for 7.159444146s
	I1013 22:03:47.227552  484490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-843554
	I1013 22:03:47.245500  484490 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:47.245554  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.245598  484490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:47.245692  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:03:47.264930  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.265089  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:03:47.378822  484490 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:47.469539  484490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:47.508492  484490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:47.513761  484490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:47.513836  484490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:47.552336  484490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:03:47.552364  484490 start.go:495] detecting cgroup driver to use...
	I1013 22:03:47.552405  484490 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:47.552458  484490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:47.570253  484490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:47.585423  484490 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:47.585487  484490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:47.604505  484490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:47.624493  484490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:47.711886  484490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:47.808332  484490 docker.go:234] disabling docker service ...
	I1013 22:03:47.808406  484490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:47.831438  484490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:47.846564  484490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:47.936209  484490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:48.024401  484490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:48.039068  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:48.055762  484490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:48.055833  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.068981  484490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:48.069065  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.079720  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.089729  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.101605  484490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:48.113238  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.124242  484490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.141603  484490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:48.151952  484490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:48.161324  484490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:48.171468  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:48.262062  484490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:03:48.384584  484490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:48.384675  484490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:48.389029  484490 start.go:563] Will wait 60s for crictl version
	I1013 22:03:48.389089  484490 ssh_runner.go:195] Run: which crictl
	I1013 22:03:48.393403  484490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:48.422366  484490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:48.422453  484490 ssh_runner.go:195] Run: crio --version
	I1013 22:03:48.459752  484490 ssh_runner.go:195] Run: crio --version
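	The block above reconfigures CRI-O in place: /etc/crictl.yaml points crictl at the crio socket, and the sed edits to /etc/crio/crio.conf.d/02-crio.conf set the pause image, the systemd cgroup manager, and the unprivileged-port sysctl before the daemon restart. A minimal sketch for double-checking those edits on such a node (standard crictl/CRI-O commands; the paths are the ones logged above):
	
	  # confirm crictl reaches the socket configured in /etc/crictl.yaml
	  sudo crictl info >/dev/null && echo "crio socket reachable"
	  # check the values the sed edits were meant to set
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # confirm the restarted daemon picked them up
	  sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'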
	I1013 22:03:48.496533  484490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:48.498541  484490 cli_runner.go:164] Run: docker network inspect newest-cni-843554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:48.518781  484490 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:48.523724  484490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
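	The hosts update above is an idempotent shell idiom worth noting: grep -v strips any existing tab-separated entry for the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts with sudo (a plain redirect would fail, since the shell, not sudo, opens the target). A generalized sketch of the same pattern, with placeholder name and address:
	
	  NAME=host.minikube.internal   # placeholder host name
	  IP=192.168.94.1               # placeholder address
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$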
	I1013 22:03:48.539804  484490 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 22:03:48.541518  484490 kubeadm.go:883] updating cluster {Name:newest-cni-843554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:48.541684  484490 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.541758  484490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:48.589195  484490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:48.589221  484490 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:48.589276  484490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:48.619839  484490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:48.619867  484490 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:48.619877  484490 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:48.620027  484490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-843554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
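	The kubelet flags above land in a systemd drop-in (the 367-byte 10-kubeadm.conf scp'd below), with ExecStart= cleared first so the override replaces the base command line rather than appending to it. A quick sketch for inspecting the result on the node, using standard systemctl subcommands:
	
	  # show the base unit plus every drop-in, including 10-kubeadm.conf
	  systemctl cat kubelet
	  # confirm the effective command line after daemon-reload
	  systemctl show kubelet -p ExecStart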
	I1013 22:03:48.620125  484490 ssh_runner.go:195] Run: crio config
	I1013 22:03:48.675020  484490 cni.go:84] Creating CNI manager for ""
	I1013 22:03:48.675054  484490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:48.675082  484490 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:03:48.675113  484490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-843554 NodeName:newest-cni-843554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:48.675349  484490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-843554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
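	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. As a sketch, a rendered config like this can be sanity-checked before kubeadm runs; note that "kubeadm config validate" exists only in recent kubeadm releases, so treat its availability as an assumption:
	
	  # static validation of the rendered config (recent kubeadm only)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # or walk the full init path without mutating the node
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run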
	
	I1013 22:03:48.675426  484490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:48.685765  484490 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:48.685840  484490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:48.700492  484490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:03:48.715480  484490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:48.736843  484490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1013 22:03:48.753852  484490 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:48.758556  484490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:48.770251  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:48.877354  484490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:48.902559  484490 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554 for IP: 192.168.94.2
	I1013 22:03:48.902585  484490 certs.go:195] generating shared ca certs ...
	I1013 22:03:48.902609  484490 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:48.902877  484490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:48.902985  484490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:48.903020  484490 certs.go:257] generating profile certs ...
	I1013 22:03:48.903097  484490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key
	I1013 22:03:48.903126  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt with IP's: []
	I1013 22:03:49.057720  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt ...
	I1013 22:03:49.057751  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.crt: {Name:mk7b8adddbfe017f323f38ba72916ea92982169d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.057949  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key ...
	I1013 22:03:49.057965  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/client.key: {Name:mk440f864649a393170c3a076e9f3a5d9875385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.058104  484490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83
	I1013 22:03:49.058124  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1013 22:03:49.214169  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 ...
	I1013 22:03:49.214201  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83: {Name:mkef5e95e537af606c6578cec70e1202f77a6fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.214360  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83 ...
	I1013 22:03:49.214373  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83: {Name:mka0c547785c945644da162e5224e48ce3abdc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.214444  484490 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt.20622c83 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt
	I1013 22:03:49.214538  484490 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key.20622c83 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key
	I1013 22:03:49.214602  484490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key
	I1013 22:03:49.214619  484490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt with IP's: []
	I1013 22:03:49.321721  484490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt ...
	I1013 22:03:49.321754  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt: {Name:mkd0a7460df55f794e99e82014d619b44d916362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.321922  484490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key ...
	I1013 22:03:49.321936  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key: {Name:mk1bcdb3df4e29f352f461649b9c23e45dfbcd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:49.322144  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:49.322182  484490 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:49.322192  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:49.322260  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:49.322289  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:49.322308  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:49.322345  484490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:49.322872  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:49.345608  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:49.365381  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:49.386555  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:49.408327  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:03:49.427607  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:03:49.448524  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:49.468469  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/newest-cni-843554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:49.488745  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:49.512177  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:49.532121  484490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:49.552092  484490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:49.565826  484490 ssh_runner.go:195] Run: openssl version
	I1013 22:03:49.572548  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:49.582259  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.587260  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.587333  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:49.623399  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:49.633159  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:49.642444  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.646726  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.646797  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:49.693448  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
	I1013 22:03:49.703828  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:49.714522  484490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.719340  484490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.719406  484490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:49.755560  484490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
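
The three test/ln pairs above publish each CA into OpenSSL's hashed lookup directory: openssl x509 -hash -noout prints the subject-name hash (b5213941, 51391683, 3ec20f2e here), and a <hash>.0 symlink under /etc/ssl/certs is what makes the certificate discoverable to TLS clients on the node. A minimal Go sketch of the same wiring, assuming a local shell rather than minikube's ssh_runner; the path is copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"

    	// openssl prints the subject-name hash used for lookups in /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Same idempotent pattern as the log: only link if <hash>.0 is absent.
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	shell := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
    	if err := exec.Command("sudo", "/bin/bash", "-c", shell).Run(); err != nil {
    		panic(err)
    	}
    }
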
	I1013 22:03:49.766938  484490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:49.771346  484490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:49.771419  484490 kubeadm.go:400] StartCluster: {Name:newest-cni-843554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-843554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:49.771494  484490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:49.771549  484490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:49.803164  484490 cri.go:89] found id: ""
	I1013 22:03:49.803236  484490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:49.812701  484490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:49.821923  484490 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:49.822014  484490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:49.831379  484490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:49.831400  484490 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:49.831460  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:49.841491  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:49.841561  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:49.851154  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:49.860270  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:49.860346  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:49.868788  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:49.877706  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:49.877797  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:49.886425  484490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:49.895971  484490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:49.896067  484490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
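
The four grep-then-rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is assumed to belong to another cluster and is deleted before kubeadm init runs (here the files simply do not exist yet, so every grep exits 2 and every rm is a no-op). A compressed sketch of that loop; runRemote is a hypothetical local stand-in for ssh_runner:

    package main

    import "os/exec"

    // runRemote is a hypothetical local stand-in for minikube's ssh_runner.
    func runRemote(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint (or the file itself) is
    		// missing; either way the config is not ours and gets removed.
    		if err := runRemote("sudo grep " + endpoint + " " + path); err != nil {
    			runRemote("sudo rm -f " + path)
    		}
    	}
    }
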
	I1013 22:03:49.904646  484490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:49.943671  484490 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:49.943748  484490 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:49.968544  484490 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:49.968630  484490 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:49.968729  484490 kubeadm.go:318] OS: Linux
	I1013 22:03:49.968806  484490 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:49.968882  484490 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:49.968956  484490 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:49.969052  484490 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:49.969122  484490 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:49.969195  484490 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:49.969294  484490 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:49.969375  484490 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:50.036448  484490 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:50.036634  484490 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:50.036804  484490 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:50.045449  484490 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:03:46.829086  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:49.329207  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:47.897468  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:50.396885  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:48.250461  487583 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:03:48.250685  487583 start.go:159] libmachine.API.Create for "auto-200102" (driver="docker")
	I1013 22:03:48.250716  487583 client.go:168] LocalClient.Create starting
	I1013 22:03:48.250779  487583 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:03:48.250824  487583 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:48.250850  487583 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:48.250920  487583 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:03:48.250945  487583 main.go:141] libmachine: Decoding PEM data...
	I1013 22:03:48.250955  487583 main.go:141] libmachine: Parsing certificate...
	I1013 22:03:48.251378  487583 cli_runner.go:164] Run: docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:03:48.269584  487583 cli_runner.go:211] docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:03:48.269669  487583 network_create.go:284] running [docker network inspect auto-200102] to gather additional debugging logs...
	I1013 22:03:48.269693  487583 cli_runner.go:164] Run: docker network inspect auto-200102
	W1013 22:03:48.290081  487583 cli_runner.go:211] docker network inspect auto-200102 returned with exit code 1
	I1013 22:03:48.290118  487583 network_create.go:287] error running [docker network inspect auto-200102]: docker network inspect auto-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-200102 not found
	I1013 22:03:48.290134  487583 network_create.go:289] output of [docker network inspect auto-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-200102 not found
	
	** /stderr **
	I1013 22:03:48.290257  487583 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:48.308172  487583 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:03:48.308839  487583 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:03:48.309630  487583 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:03:48.310415  487583 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd127c16ad94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:91:d2:e9:26:c1} reservation:<nil>}
	I1013 22:03:48.311447  487583 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb4820}
	I1013 22:03:48.311476  487583 network_create.go:124] attempt to create docker network auto-200102 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:03:48.311546  487583 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-200102 auto-200102
	I1013 22:03:48.377591  487583 network_create.go:108] docker network auto-200102 192.168.85.0/24 created
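
Subnet selection above is a linear scan: starting at 192.168.49.0/24 and stepping the third octet by 9 (49, 58, 67, 76, 85, ...), minikube takes the first /24 that no existing bridge occupies, then creates the network with .1 as the gateway and assigns the node the static .2. A rough sketch of the scan; isTaken is a hypothetical stand-in for the docker-bridge lookup that network.go performs:

    package main

    import "fmt"

    // isTaken is a hypothetical stand-in for inspecting existing docker
    // bridge networks; here it replays the four taken subnets from the log.
    func isTaken(subnet string) bool {
    	used := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	return used[subnet]
    }

    func main() {
    	// Matches the scan in the log: start at .49, step the third octet by 9.
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !isTaken(subnet) {
    			fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 here
    			return
    		}
    	}
    	fmt.Println("no free private subnet found")
    }
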
	I1013 22:03:48.377626  487583 kic.go:121] calculated static IP "192.168.85.2" for the "auto-200102" container
	I1013 22:03:48.377682  487583 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:03:48.398421  487583 cli_runner.go:164] Run: docker volume create auto-200102 --label name.minikube.sigs.k8s.io=auto-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:03:48.418179  487583 oci.go:103] Successfully created a docker volume auto-200102
	I1013 22:03:48.418284  487583 cli_runner.go:164] Run: docker run --rm --name auto-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-200102 --entrypoint /usr/bin/test -v auto-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:03:48.836404  487583 oci.go:107] Successfully prepared a docker volume auto-200102
	I1013 22:03:48.836446  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:48.836473  487583 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:03:48.836531  487583 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
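
The two docker runs above are the volume-preload trick: a throwaway container first probes the freshly created named volume (/usr/bin/test -d /var/lib), then a second one bind-mounts the preload tarball read-only next to that volume and lets tar populate it, so the container images land in the volume before the node container even exists. Roughly, as a sketch (image tag and paths copied from the log):

    package main

    import "os/exec"

    // preloadCmd builds the extraction command seen above: tar runs inside a
    // disposable container with the tarball mounted read-only and the named
    // volume mounted as the extraction target.
    func preloadCmd(tarball, volume, image string) *exec.Cmd {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }

    func main() {
    	cmd := preloadCmd(
    		"/home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
    		"auto-200102",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225")
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
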
	I1013 22:03:50.048505  484490 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:50.048632  484490 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:50.048750  484490 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:50.091988  484490 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:50.812126  484490 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:51.090226  484490 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:51.169725  484490 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:03:51.456659  484490 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:03:51.456791  484490 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-843554] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:03:51.756078  484490 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:03:51.756203  484490 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-843554] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1013 22:03:52.153252  484490 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:03:52.289647  484490 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:03:52.667517  484490 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:03:52.667604  484490 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:03:52.883918  484490 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:03:52.962553  484490 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:03:53.337169  484490 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:03:54.049057  484490 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:03:54.118769  484490 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:03:54.118915  484490 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:03:54.126793  484490 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:03:54.129568  484490 out.go:252]   - Booting up control plane ...
	I1013 22:03:54.129744  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:03:54.129875  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:03:54.130021  484490 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:03:54.160861  484490 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:03:54.161068  484490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:03:54.170584  484490 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:03:54.171370  484490 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:03:54.171506  484490 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:03:54.286488  484490 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:03:54.286636  484490 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1013 22:03:51.829574  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	W1013 22:03:54.328986  477441 node_ready.go:57] node "default-k8s-diff-port-505851" has "Ready":"False" status (will retry)
	I1013 22:03:55.329559  477441 node_ready.go:49] node "default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:55.329588  477441 node_ready.go:38] duration metric: took 10.503950666s for node "default-k8s-diff-port-505851" to be "Ready" ...
	I1013 22:03:55.329604  477441 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:03:55.329651  477441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:03:55.342935  477441 api_server.go:72] duration metric: took 11.238841693s to wait for apiserver process to appear ...
	I1013 22:03:55.342963  477441 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:03:55.342986  477441 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:03:55.348851  477441 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1013 22:03:55.349977  477441 api_server.go:141] control plane version: v1.34.1
	I1013 22:03:55.350033  477441 api_server.go:131] duration metric: took 7.06078ms to wait for apiserver health ...
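
The healthz gate above is nothing more than an HTTPS GET against the apiserver that must come back 200 with a body of "ok". A minimal probe sketch; skipping TLS verification is an assumption made here for brevity (the real client can pin the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for the sketch only; pin the cluster CA in real use.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.76.2:8444/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
    }
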
	I1013 22:03:55.350046  477441 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:03:55.353189  477441 system_pods.go:59] 8 kube-system pods found
	I1013 22:03:55.353230  477441 system_pods.go:61] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.353251  477441 system_pods.go:61] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.353264  477441 system_pods.go:61] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.353273  477441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.353282  477441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.353384  477441 system_pods.go:61] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.353393  477441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.353405  477441 system_pods.go:61] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.353422  477441 system_pods.go:74] duration metric: took 3.363327ms to wait for pod list to return data ...
	I1013 22:03:55.353439  477441 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:03:55.355859  477441 default_sa.go:45] found service account: "default"
	I1013 22:03:55.355880  477441 default_sa.go:55] duration metric: took 2.430374ms for default service account to be created ...
	I1013 22:03:55.355889  477441 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:03:55.358605  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.358632  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.358639  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.358645  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.358651  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.358659  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.358669  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.358673  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.358680  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.358716  477441 retry.go:31] will retry after 253.55772ms: missing components: kube-dns
	I1013 22:03:55.619570  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.619654  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.619702  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.619721  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.619728  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.619733  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.619743  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.619757  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.619787  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.619814  477441 retry.go:31] will retry after 279.508132ms: missing components: kube-dns
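
The two passes above show the retry.go pattern: list the kube-system pods, diff against the required component set, and if anything is still missing (kube-dns here, i.e. the Pending coredns pod), sleep a short randomized interval and poll again. A sketch of that loop; missingComponents is a hypothetical stand-in for the pod check:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // missingComponents is a hypothetical stand-in for listing kube-system
    // pods and diffing them against the required set.
    func missingComponents() []string { return nil }

    func waitForApps(deadline time.Duration) error {
    	for start := time.Now(); time.Since(start) < deadline; {
    		if missing := missingComponents(); len(missing) == 0 {
    			return nil
    		}
    		// Short randomized wait between polls, like the 253ms/279ms above.
    		time.Sleep(200*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for k8s-apps")
    }

    func main() {
    	if err := waitForApps(6 * time.Minute); err != nil {
    		panic(err)
    	}
    }
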
	W1013 22:03:52.897348  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:55.398548  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:53.444083  487583 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.60749131s)
	I1013 22:03:53.444126  487583 kic.go:203] duration metric: took 4.607647301s to extract preloaded images to volume ...
	W1013 22:03:53.444253  487583 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:03:53.444294  487583 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:03:53.444355  487583 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:03:53.505540  487583 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-200102 --name auto-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-200102 --network auto-200102 --ip 192.168.85.2 --volume auto-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:03:53.797795  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Running}}
	I1013 22:03:53.818073  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:53.838163  487583 cli_runner.go:164] Run: docker exec auto-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:03:53.888227  487583 oci.go:144] the created container "auto-200102" has a running status.
	I1013 22:03:53.888268  487583 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa...
	I1013 22:03:54.127683  487583 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:03:54.165534  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:54.190445  487583 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:03:54.190471  487583 kic_runner.go:114] Args: [docker exec --privileged auto-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:03:54.250288  487583 cli_runner.go:164] Run: docker container inspect auto-200102 --format={{.State.Status}}
	I1013 22:03:54.271713  487583 machine.go:93] provisionDockerMachine start ...
	I1013 22:03:54.271839  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.294100  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.294394  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.294410  487583 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:03:54.440486  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-200102
	
	I1013 22:03:54.440521  487583 ubuntu.go:182] provisioning hostname "auto-200102"
	I1013 22:03:54.440599  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.460711  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.461103  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.461129  487583 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-200102 && echo "auto-200102" | sudo tee /etc/hostname
	I1013 22:03:54.614535  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-200102
	
	I1013 22:03:54.614622  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:54.634586  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:54.634913  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:54.634939  487583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:03:54.775376  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:03:54.775453  487583 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:03:54.775507  487583 ubuntu.go:190] setting up certificates
	I1013 22:03:54.775525  487583 provision.go:84] configureAuth start
	I1013 22:03:54.775607  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:54.794893  487583 provision.go:143] copyHostCerts
	I1013 22:03:54.794955  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:03:54.794966  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:03:54.795082  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:03:54.795182  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:03:54.795192  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:03:54.795220  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:03:54.795279  487583 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:03:54.795286  487583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:03:54.795308  487583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:03:54.795376  487583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.auto-200102 san=[127.0.0.1 192.168.85.2 auto-200102 localhost minikube]
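
The server cert above is issued off the shared minikube CA with SANs covering every name the docker-machine daemon might be dialed by: loopback, the node IP, the machine name, localhost, and minikube. A standard-library sketch of an equivalent issuance; the throwaway CA stands in for .minikube/certs/ca.pem, and the SAN list is copied from the provision.go line:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	must(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	must(err)

    	// Server certificate with the SAN set from the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.auto-200102"}},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:     []string{"auto-200102", "localhost", "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	_, err = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	must(err)
    }
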
	I1013 22:03:55.188429  487583 provision.go:177] copyRemoteCerts
	I1013 22:03:55.188510  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:03:55.188566  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.208580  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:55.315410  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:03:55.338226  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:03:55.360346  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:03:55.380477  487583 provision.go:87] duration metric: took 604.930225ms to configureAuth
	I1013 22:03:55.380510  487583 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:03:55.380713  487583 config.go:182] Loaded profile config "auto-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:03:55.380859  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.403380  487583 main.go:141] libmachine: Using SSH client type: native
	I1013 22:03:55.403687  487583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1013 22:03:55.403708  487583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:03:55.708397  487583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:03:55.708428  487583 machine.go:96] duration metric: took 1.436678744s to provisionDockerMachine
	I1013 22:03:55.708441  487583 client.go:171] duration metric: took 7.457718927s to LocalClient.Create
	I1013 22:03:55.708465  487583 start.go:167] duration metric: took 7.457781344s to libmachine.API.Create "auto-200102"
	I1013 22:03:55.708474  487583 start.go:293] postStartSetup for "auto-200102" (driver="docker")
	I1013 22:03:55.708486  487583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:03:55.708549  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:03:55.708593  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.731147  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:55.835461  487583 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:03:55.839937  487583 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:03:55.839973  487583 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:03:55.839987  487583 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:03:55.840062  487583 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:03:55.840155  487583 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:03:55.840296  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:03:55.849304  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:55.874132  487583 start.go:296] duration metric: took 165.640662ms for postStartSetup
	I1013 22:03:55.874609  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:55.895464  487583 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/config.json ...
	I1013 22:03:55.895821  487583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:03:55.895876  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:55.918553  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.017585  487583 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:03:56.022832  487583 start.go:128] duration metric: took 7.775200902s to createHost
	I1013 22:03:56.022863  487583 start.go:83] releasing machines lock for "auto-200102", held for 7.775332897s
	I1013 22:03:56.022941  487583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-200102
	I1013 22:03:56.042596  487583 ssh_runner.go:195] Run: cat /version.json
	I1013 22:03:56.042662  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:56.042670  487583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:03:56.042775  487583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-200102
	I1013 22:03:56.063800  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.064255  487583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/auto-200102/id_rsa Username:docker}
	I1013 22:03:56.159743  487583 ssh_runner.go:195] Run: systemctl --version
	I1013 22:03:56.225451  487583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:03:56.267876  487583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:03:56.273025  487583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:03:56.273101  487583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:03:56.301157  487583 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
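
The find/-exec mv above disables any pre-existing bridge or podman CNI config by renaming it with a .mk_disabled suffix, so the kindnet CNI chosen later is the only one CRI-O will load. Locally the same move is a glob plus rename; a sketch assuming direct file access rather than SSH:

    package main

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pat)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already side-lined
    			}
    			// Renaming (not deleting) keeps the config recoverable.
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    		}
    	}
    }
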
	I1013 22:03:56.301183  487583 start.go:495] detecting cgroup driver to use...
	I1013 22:03:56.301217  487583 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:03:56.301264  487583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:03:56.320515  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:03:56.335527  487583 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:03:56.335597  487583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:03:56.356535  487583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:03:56.379123  487583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:03:56.511060  487583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:03:56.630403  487583 docker.go:234] disabling docker service ...
	I1013 22:03:56.630478  487583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:03:56.654697  487583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:03:56.669666  487583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:03:56.772800  487583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:03:56.867300  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:03:56.883388  487583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:03:56.902552  487583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:03:56.902621  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.914797  487583 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:03:56.914859  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.924554  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.934706  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.944950  487583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:03:56.954161  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.963680  487583 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.978238  487583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:03:56.988376  487583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:03:56.997805  487583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:03:57.008209  487583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:57.102217  487583 ssh_runner.go:195] Run: sudo systemctl restart crio
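
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a whole new file: pin the pause image, force the systemd cgroup manager (matching the driver detected on the host), keep conmon_cgroup = "pod" adjacent to it, and open unprivileged ports via default_sysctls, then daemon-reload and restart crio. A condensed sketch of the core of that sequence; runRemote is again a hypothetical stand-in for ssh_runner:

    package main

    import "os/exec"

    // runRemote is a hypothetical local stand-in for minikube's ssh_runner.
    func runRemote(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	edits := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' ` + conf,
    		// Delete then re-append conmon_cgroup so it sits right after cgroup_manager.
    		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    	}
    	for _, e := range edits {
    		if err := runRemote(e); err != nil {
    			panic(err)
    		}
    	}
    	// Pick up the rewritten config.
    	if err := runRemote("sudo systemctl daemon-reload && sudo systemctl restart crio"); err != nil {
    		panic(err)
    	}
    }
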
	I1013 22:03:57.219578  487583 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:03:57.219638  487583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:03:57.224855  487583 start.go:563] Will wait 60s for crictl version
	I1013 22:03:57.224920  487583 ssh_runner.go:195] Run: which crictl
	I1013 22:03:57.228808  487583 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:03:57.258192  487583 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:03:57.258284  487583 ssh_runner.go:195] Run: crio --version
	I1013 22:03:57.294146  487583 ssh_runner.go:195] Run: crio --version
	I1013 22:03:57.330295  487583 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:03:57.331555  487583 cli_runner.go:164] Run: docker network inspect auto-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:03:57.352700  487583 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:03:57.357856  487583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
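
The hosts update above deliberately avoids sed -i or a rename: inside a container /etc/hosts is a bind mount, so the file has to be rewritten through its existing inode. Filtering the old entry into /tmp/h.$$ and then cp'ing the result back achieves that (grep -v first drops any stale line for the same name). A small sketch that builds the same command string; addHostsEntry is a hypothetical helper, not minikube API:

    package main

    import "fmt"

    // addHostsEntry builds the bind-mount-safe /etc/hosts rewrite used above:
    // filter out any old line for the host, append the new tab-separated
    // entry into a temp file, then cp over /etc/hosts in place.
    func addHostsEntry(ip, host string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		host, ip, host)
    }

    func main() {
    	fmt.Println(addHostsEntry("192.168.85.2", "control-plane.minikube.internal"))
    }
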
	I1013 22:03:57.370430  487583 kubeadm.go:883] updating cluster {Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:03:57.370591  487583 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:03:57.370677  487583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:57.412033  487583 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:57.412066  487583 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:03:57.412128  487583 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:03:57.446310  487583 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:03:57.446335  487583 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:03:57.446346  487583 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:03:57.446458  487583 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:03:57.446545  487583 ssh_runner.go:195] Run: crio config
	I1013 22:03:57.502468  487583 cni.go:84] Creating CNI manager for ""
	I1013 22:03:57.502491  487583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:03:57.502510  487583 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:03:57.502535  487583 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-200102 NodeName:auto-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:03:57.502675  487583 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
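
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what kubeadm consumes from /var/tmp/minikube/kubeadm.yaml. A config like this can be sanity-checked before init; a sketch, assuming kubeadm >= 1.26 where the validate subcommand exists:

  # Validate the assembled config against the kubeadm API types
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  # Compare against the upstream defaults for the same API version
  kubeadm config print init-defaults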
	
	I1013 22:03:57.502750  487583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:03:57.514371  487583 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:03:57.514451  487583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:03:57.523122  487583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 22:03:57.536941  487583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:03:57.554920  487583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1013 22:03:57.571949  487583 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:03:57.576445  487583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:03:57.588402  487583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:03:57.676518  487583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:03:57.702875  487583 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102 for IP: 192.168.85.2
	I1013 22:03:57.702904  487583 certs.go:195] generating shared ca certs ...
	I1013 22:03:57.702930  487583 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.703216  487583 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:03:57.703262  487583 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:03:57.703278  487583 certs.go:257] generating profile certs ...
	I1013 22:03:57.703341  487583 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key
	I1013 22:03:57.703367  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt with IP's: []
	I1013 22:03:57.851026  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt ...
	I1013 22:03:57.851064  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.crt: {Name:mk2579bf978c11798ebce23c8f9b2443dab8b152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.851280  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key ...
	I1013 22:03:57.851296  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/client.key: {Name:mk7f8f06d5515585f3c94e35065c0da5eafac2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.851424  487583 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274
	I1013 22:03:57.851447  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:03:57.985116  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 ...
	I1013 22:03:57.985154  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274: {Name:mk74c23f716571620a1007598ae871740882eb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.985409  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274 ...
	I1013 22:03:57.985429  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274: {Name:mk194bd31c37146af9822ed9392ca6af9be4ed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:57.985662  487583 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt.443b2274 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt
	I1013 22:03:57.985848  487583 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key.443b2274 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key
	I1013 22:03:57.985965  487583 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key
	I1013 22:03:57.986011  487583 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt with IP's: []
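
minikube generates these profile certificates in-process (crypto.go uses Go's crypto/x509), signing each leaf with the shared minikubeCA and embedding the service, loopback, and node IPs as SANs. Roughly the same result expressed with openssl, as a sketch only (file names here are illustrative, not minikube's):

  # Key + CSR for the apiserver leaf cert
  openssl genrsa -out apiserver.key 2048
  openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
  # Sign with the CA, embedding the IP SANs seen in the log above
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")
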
	I1013 22:03:55.904621  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:55.904664  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:55.904673  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:55.904682  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:55.904688  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:55.904694  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:55.904699  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:55.904704  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:55.904712  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:55.904741  477441 retry.go:31] will retry after 412.348385ms: missing components: kube-dns
	I1013 22:03:56.321704  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:56.321754  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:03:56.321764  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:56.321778  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:56.321785  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:56.321795  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:56.321801  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:56.321809  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:56.321818  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:03:56.321842  477441 retry.go:31] will retry after 533.406261ms: missing components: kube-dns
	I1013 22:03:56.860110  477441 system_pods.go:86] 8 kube-system pods found
	I1013 22:03:56.860143  477441 system_pods.go:89] "coredns-66bc5c9577-5x8dn" [2b78411d-d81f-4b88-9a8d-921f7c26ec16] Running
	I1013 22:03:56.860151  477441 system_pods.go:89] "etcd-default-k8s-diff-port-505851" [aed8b3be-779b-41fa-a0a3-d935cdc6ad0b] Running
	I1013 22:03:56.860159  477441 system_pods.go:89] "kindnet-m5whc" [f794ce45-bb06-44ce-beae-bffe3ff9d2c0] Running
	I1013 22:03:56.860164  477441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-505851" [d7c818e1-b20b-40aa-afe6-7032c378c841] Running
	I1013 22:03:56.860170  477441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-505851" [d6f5cccc-8810-4862-9add-7319d03ca442] Running
	I1013 22:03:56.860175  477441 system_pods.go:89] "kube-proxy-27pnt" [3cb84f83-962c-4830-bdad-0084bc59a7c4] Running
	I1013 22:03:56.860180  477441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-505851" [7481baab-00e9-4015-bf26-4e389a1bf472] Running
	I1013 22:03:56.860185  477441 system_pods.go:89] "storage-provisioner" [2b8d56b5-894f-44d4-8b07-d3507c981fc0] Running
	I1013 22:03:56.860197  477441 system_pods.go:126] duration metric: took 1.504300371s to wait for k8s-apps to be running ...
	I1013 22:03:56.860211  477441 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:03:56.860265  477441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:03:56.876876  477441 system_svc.go:56] duration metric: took 16.654755ms WaitForService to wait for kubelet
	I1013 22:03:56.876916  477441 kubeadm.go:586] duration metric: took 12.772822907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:03:56.876941  477441 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:03:56.879720  477441 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:03:56.879749  477441 node_conditions.go:123] node cpu capacity is 8
	I1013 22:03:56.879764  477441 node_conditions.go:105] duration metric: took 2.817207ms to run NodePressure ...
	I1013 22:03:56.879776  477441 start.go:241] waiting for startup goroutines ...
	I1013 22:03:56.879782  477441 start.go:246] waiting for cluster config update ...
	I1013 22:03:56.879793  477441 start.go:255] writing updated cluster config ...
	I1013 22:03:56.880082  477441 ssh_runner.go:195] Run: rm -f paused
	I1013 22:03:56.884439  477441 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:56.888364  477441 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.892803  477441 pod_ready.go:94] pod "coredns-66bc5c9577-5x8dn" is "Ready"
	I1013 22:03:56.892837  477441 pod_ready.go:86] duration metric: took 4.448764ms for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.895141  477441 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.899825  477441 pod_ready.go:94] pod "etcd-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:56.899856  477441 pod_ready.go:86] duration metric: took 4.683785ms for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.901925  477441 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.906108  477441 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:56.906138  477441 pod_ready.go:86] duration metric: took 4.191264ms for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:56.908355  477441 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.289538  477441 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:57.289578  477441 pod_ready.go:86] duration metric: took 381.195592ms for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.489112  477441 pod_ready.go:83] waiting for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:57.889231  477441 pod_ready.go:94] pod "kube-proxy-27pnt" is "Ready"
	I1013 22:03:57.889265  477441 pod_ready.go:86] duration metric: took 400.118576ms for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.090857  477441 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.489541  477441 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-505851" is "Ready"
	I1013 22:03:58.489575  477441 pod_ready.go:86] duration metric: took 398.683022ms for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:03:58.489590  477441 pod_ready.go:40] duration metric: took 1.605113886s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:03:58.540564  477441 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:03:58.542840  477441 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-505851" cluster and "default" namespace by default
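
The pod_ready loop above is minikube's own readiness gate, keyed on the component labels listed in the log. The equivalent check from a shell, as a sketch:

  # Wait for the same labeled kube-system pods to report Ready
  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m
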
	I1013 22:03:58.039166  487583 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt ...
	I1013 22:03:58.039192  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt: {Name:mkee870077425074c906eaa1754c70c39dd1609a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:58.039371  487583 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key ...
	I1013 22:03:58.039389  487583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key: {Name:mk655d2a19977110d0e11b0a3d6a87cbec7dcec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:03:58.039571  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:03:58.039607  487583 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:03:58.039615  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:03:58.039634  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:03:58.039654  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:03:58.039687  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:03:58.039725  487583 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:03:58.040290  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:03:58.059358  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:03:58.084643  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:03:58.109884  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:03:58.133290  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:03:58.153882  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:03:58.175490  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:03:58.194357  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/auto-200102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:03:58.213174  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:03:58.234011  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:03:58.253249  487583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:03:58.273422  487583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:03:58.288505  487583 ssh_runner.go:195] Run: openssl version
	I1013 22:03:58.295475  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:03:58.305197  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.309361  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.309420  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:03:58.350087  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:03:58.359841  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:03:58.369253  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.373660  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.373713  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:03:58.410477  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:03:58.420091  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:03:58.429157  487583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.433163  487583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.433218  487583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:03:58.472582  487583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
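
The block above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 3ec20f2e.0, 51391683.0), which is how OpenSSL's -CApath lookup locates issuers. A sketch of the same pattern for one cert:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
  # The hash link is what makes directory-based verification work:
  openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
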
	I1013 22:03:58.482625  487583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:03:58.486464  487583 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:03:58.486515  487583 kubeadm.go:400] StartCluster: {Name:auto-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:03:58.486602  487583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:03:58.486644  487583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:03:58.518129  487583 cri.go:89] found id: ""
	I1013 22:03:58.518203  487583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:03:58.528420  487583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:03:58.537447  487583 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:03:58.537509  487583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:03:58.545975  487583 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:03:58.546041  487583 kubeadm.go:157] found existing configuration files:
	
	I1013 22:03:58.546097  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:03:58.555032  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:03:58.555095  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:03:58.566394  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:03:58.576170  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:03:58.576233  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:03:58.585668  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:03:58.594090  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:03:58.594152  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:03:58.603321  487583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:03:58.612260  487583 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:03:58.612342  487583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:03:58.620752  487583 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:03:58.668965  487583 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:03:58.669079  487583 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:03:58.694495  487583 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:03:58.694619  487583 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:03:58.694675  487583 kubeadm.go:318] OS: Linux
	I1013 22:03:58.694719  487583 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:03:58.694794  487583 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:03:58.694890  487583 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:03:58.694986  487583 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:03:58.695084  487583 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:03:58.695165  487583 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:03:58.695250  487583 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:03:58.695324  487583 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:03:58.763338  487583 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:03:58.763475  487583 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:03:58.763592  487583 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:03:58.771259  487583 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:03:55.287236  484490 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000886624s
	I1013 22:03:55.290383  484490 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:03:55.290549  484490 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1013 22:03:55.290705  484490 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:03:55.290828  484490 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:03:56.796341  484490 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.505897974s
	I1013 22:03:58.031333  484490 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.740905764s
	I1013 22:03:59.792828  484490 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502441565s
	I1013 22:03:59.807208  484490 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:03:59.821040  484490 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:03:59.832233  484490 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:03:59.832557  484490 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-843554 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:03:59.841625  484490 kubeadm.go:318] [bootstrap-token] Using token: qujhya.lp2l688dgho08i02
	I1013 22:03:59.842861  484490 out.go:252]   - Configuring RBAC rules ...
	I1013 22:03:59.843045  484490 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:03:59.848430  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:03:59.854539  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:03:59.857466  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:03:59.860477  484490 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:03:59.863374  484490 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:04:00.199711  484490 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:04:00.617233  484490 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:04:01.200369  484490 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:04:01.201630  484490 kubeadm.go:318] 
	I1013 22:04:01.201749  484490 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:04:01.201760  484490 kubeadm.go:318] 
	I1013 22:04:01.201861  484490 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:04:01.201877  484490 kubeadm.go:318] 
	I1013 22:04:01.201913  484490 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:04:01.201976  484490 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:04:01.202065  484490 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:04:01.202074  484490 kubeadm.go:318] 
	I1013 22:04:01.202147  484490 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:04:01.202158  484490 kubeadm.go:318] 
	I1013 22:04:01.202214  484490 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:04:01.202224  484490 kubeadm.go:318] 
	I1013 22:04:01.202327  484490 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:04:01.202403  484490 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:04:01.202463  484490 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:04:01.202469  484490 kubeadm.go:318] 
	I1013 22:04:01.202573  484490 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:04:01.202685  484490 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:04:01.202692  484490 kubeadm.go:318] 
	I1013 22:04:01.202812  484490 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qujhya.lp2l688dgho08i02 \
	I1013 22:04:01.202954  484490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:04:01.202980  484490 kubeadm.go:318] 	--control-plane 
	I1013 22:04:01.202984  484490 kubeadm.go:318] 
	I1013 22:04:01.203115  484490 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:04:01.203129  484490 kubeadm.go:318] 
	I1013 22:04:01.203248  484490 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qujhya.lp2l688dgho08i02 \
	I1013 22:04:01.203347  484490 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:04:01.207191  484490 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:04:01.207356  484490 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
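
The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 over the DER-encoded public key of the cluster CA. It can be recomputed from the CA cert (certificatesDir is /var/lib/minikube/certs per the config earlier; the pipeline is the standard recipe from the kubeadm docs, assuming an RSA CA key):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
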
	I1013 22:04:01.207384  484490 cni.go:84] Creating CNI manager for ""
	I1013 22:04:01.207398  484490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:01.209351  484490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:03:57.398768  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:03:59.897919  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:03:58.773424  487583 out.go:252]   - Generating certificates and keys ...
	I1013 22:03:58.773529  487583 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:03:58.773601  487583 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:03:58.929208  487583 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:03:59.149233  487583 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:03:59.436094  487583 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:03:59.839862  487583 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:04:00.016670  487583 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:04:00.016898  487583 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:00.136464  487583 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:04:00.136727  487583 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:00.475165  487583 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:04:01.108386  487583 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:04:01.372861  487583 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:04:01.373005  487583 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:04:01.483598  487583 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:04:01.771328  487583 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:04:02.020984  487583 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:04:02.485429  487583 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:04:02.805340  487583 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:04:02.805811  487583 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:04:02.811547  487583 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:04:02.813104  487583 out.go:252]   - Booting up control plane ...
	I1013 22:04:02.813188  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:04:02.813249  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:04:02.813887  487583 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:04:02.840786  487583 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:04:02.840946  487583 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:04:02.849340  487583 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:04:02.849635  487583 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:04:02.849700  487583 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:04:02.956750  487583 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:04:02.956896  487583 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:04:01.210712  484490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:04:01.215507  484490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:04:01.215531  484490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:04:01.229960  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:04:01.461985  484490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:04:01.462089  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:01.462144  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-843554 minikube.k8s.io/updated_at=2025_10_13T22_04_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=newest-cni-843554 minikube.k8s.io/primary=true
	I1013 22:04:01.546592  484490 ops.go:34] apiserver oom_adj: -16
	I1013 22:04:01.546647  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:02.047002  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:02.546699  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:03.047144  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:03.546720  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:04.047188  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:04.546953  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:05.046752  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:05.547270  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:06.046904  484490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:06.132724  484490 kubeadm.go:1113] duration metric: took 4.670694614s to wait for elevateKubeSystemPrivileges
	I1013 22:04:06.132764  484490 kubeadm.go:402] duration metric: took 16.361347762s to StartCluster
	I1013 22:04:06.132788  484490 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:06.132880  484490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:06.134776  484490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:06.135092  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:04:06.135107  484490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:04:06.135199  484490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:04:06.135293  484490 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-843554"
	I1013 22:04:06.135310  484490 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:06.135321  484490 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-843554"
	I1013 22:04:06.135361  484490 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:06.135314  484490 addons.go:69] Setting default-storageclass=true in profile "newest-cni-843554"
	I1013 22:04:06.135419  484490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-843554"
	I1013 22:04:06.135832  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.136032  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.137115  484490 out.go:179] * Verifying Kubernetes components...
	I1013 22:04:06.141173  484490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:04:06.161027  484490 addons.go:238] Setting addon default-storageclass=true in "newest-cni-843554"
	I1013 22:04:06.161078  484490 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:06.161553  484490 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:06.162361  484490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1013 22:04:02.397017  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	W1013 22:04:04.897852  476377 node_ready.go:57] node "embed-certs-521669" has "Ready":"False" status (will retry)
	I1013 22:04:06.164230  484490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:06.164253  484490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:04:06.164305  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:04:06.198126  484490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:06.198154  484490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:04:06.198219  484490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:04:06.207029  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:04:06.231679  484490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:04:06.254188  484490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
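
The long sed pipeline above splices a hosts plugin block into CoreDNS's Corefile so that host.minikube.internal resolves to the gateway IP. The injected result can be confirmed after the fact; a sketch:

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # Expect a block like:
  #     hosts {
  #        192.168.94.1 host.minikube.internal
  #        fallthrough
  #     }
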
	I1013 22:04:06.316606  484490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:04:06.362597  484490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:06.362782  484490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:06.481314  484490 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1013 22:04:06.483112  484490 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:06.483175  484490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:06.754622  484490 api_server.go:72] duration metric: took 619.477155ms to wait for apiserver process to appear ...
	I1013 22:04:06.754648  484490 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:06.754678  484490 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:06.756049  484490 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1013 22:04:06.757861  484490 addons.go:514] duration metric: took 622.655495ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 22:04:06.762851  484490 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:04:06.764189  484490 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:06.764221  484490 api_server.go:131] duration metric: took 9.565104ms to wait for apiserver health ...
	I1013 22:04:06.764233  484490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:06.769551  484490 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:06.769617  484490 system_pods.go:61] "coredns-66bc5c9577-br2pb" [531000de-cace-4ffd-ae65-51208d0783c5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:06.769627  484490 system_pods.go:61] "etcd-newest-cni-843554" [b1660b76-27be-45d2-89da-274c5320b389] Running
	I1013 22:04:06.769635  484490 system_pods.go:61] "kindnet-x9k2d" [dda7fe66-1403-4701-b8af-1b3502336d9d] Running
	I1013 22:04:06.769642  484490 system_pods.go:61] "kube-apiserver-newest-cni-843554" [0c7381fb-b918-4708-afa7-ad537bf1c3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:06.769648  484490 system_pods.go:61] "kube-controller-manager-newest-cni-843554" [b31669bc-26b0-45c5-aae7-2e7132dcfe60] Running
	I1013 22:04:06.769653  484490 system_pods.go:61] "kube-proxy-zgkgm" [f6dbeddd-feee-4c6f-a51b-add1412128a2] Running
	I1013 22:04:06.769675  484490 system_pods.go:61] "kube-scheduler-newest-cni-843554" [dea0fcee-f00b-4190-a4fa-4bc097a9f7d0] Running
	I1013 22:04:06.769682  484490 system_pods.go:61] "storage-provisioner" [2f536c11-96c7-4cb7-8128-eb53b3d44ce8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:06.769695  484490 system_pods.go:74] duration metric: took 5.44993ms to wait for pod list to return data ...
	I1013 22:04:06.769707  484490 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:06.774681  484490 default_sa.go:45] found service account: "default"
	I1013 22:04:06.774776  484490 default_sa.go:55] duration metric: took 5.056823ms for default service account to be created ...
	I1013 22:04:06.774804  484490 kubeadm.go:586] duration metric: took 639.66415ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:04:06.774850  484490 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:06.784169  484490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:06.784276  484490 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:06.784297  484490 node_conditions.go:105] duration metric: took 9.424109ms to run NodePressure ...
	I1013 22:04:06.784345  484490 start.go:241] waiting for startup goroutines ...
	I1013 22:04:06.986332  484490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-843554" context rescaled to 1 replicas
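
The rescale noted above pins the coredns deployment to a single replica, minikube's default for a single-node cluster. Done by hand it would be, as a sketch:

  kubectl -n kube-system scale deployment coredns --replicas=1
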
	I1013 22:04:06.986377  484490 start.go:246] waiting for cluster config update ...
	I1013 22:04:06.986392  484490 start.go:255] writing updated cluster config ...
	I1013 22:04:06.986733  484490 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:07.047581  484490 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:07.048837  484490 out.go:179] * Done! kubectl is now configured to use "newest-cni-843554" cluster and "default" namespace by default
	I1013 22:04:03.958326  487583 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001671784s
	I1013 22:04:03.961163  487583 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:04:03.961296  487583 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:04:03.961423  487583 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:04:03.961529  487583 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:04:05.922324  487583 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.960993368s
	I1013 22:04:06.114804  487583 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.151982439s
	I1013 22:04:07.463750  487583 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502322214s
	I1013 22:04:07.481929  487583 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:04:07.495056  487583 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:04:07.506674  487583 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:04:07.506884  487583 kubeadm.go:318] [mark-control-plane] Marking the node auto-200102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:04:07.516714  487583 kubeadm.go:318] [bootstrap-token] Using token: dl2d9l.iygr7ej5tcfc30w5
	
	
	==> CRI-O <==
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.319962584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.320599248Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a27e983a-b9c1-40fe-8e25-4cd8a2dbd304 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.324331302Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.324951907Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0dc48938-29b6-4e30-9f9c-123744b6dc08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.32557536Z" level=info msg="Ran pod sandbox cc36a35ef24aca5367674dbec4fb02e69f48a96cc2ce87689b73dc27361f0ccc with infra container: kube-system/kindnet-x9k2d/POD" id=a27e983a-b9c1-40fe-8e25-4cd8a2dbd304 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.329804701Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.331470559Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=396332ac-d230-46ad-be88-d88c57c306ab name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.333589372Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=39b9771c-af9b-48d8-8041-fd679328c599 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.333610049Z" level=info msg="Ran pod sandbox 347a06efffbfe5b840f3b766ec4972182d39963c64dd2e4570400e4125fdbe50 with infra container: kube-system/kube-proxy-zgkgm/POD" id=0dc48938-29b6-4e30-9f9c-123744b6dc08 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.336899965Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d54af676-f8c6-45d3-b7da-bd734bf17e2d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.340759676Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ec6fc1ba-1b2d-4a08-b678-17f3ca6443ea name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.341896413Z" level=info msg="Creating container: kube-system/kindnet-x9k2d/kindnet-cni" id=eb44f698-7822-4ac6-b9a8-6498f663187e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.342597639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.344893169Z" level=info msg="Creating container: kube-system/kube-proxy-zgkgm/kube-proxy" id=38761305-c4a2-4a76-9791-86c0beacd97f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.346133936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.347610813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.348483955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.357085822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.357909056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.387729657Z" level=info msg="Created container 26fe3cafb4e7fda6cb184894fd0612b5f3bc014701499c1835280b53924e4328: kube-system/kindnet-x9k2d/kindnet-cni" id=eb44f698-7822-4ac6-b9a8-6498f663187e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.389575234Z" level=info msg="Starting container: 26fe3cafb4e7fda6cb184894fd0612b5f3bc014701499c1835280b53924e4328" id=ba9d6cff-6762-42f7-8c67-dd996c540ae5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.390744251Z" level=info msg="Created container 2599f9ed5c0e2999bd950416e2fb8a36a6630d5573fd3b8b6b9a9e700371c0c2: kube-system/kube-proxy-zgkgm/kube-proxy" id=38761305-c4a2-4a76-9791-86c0beacd97f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.391555994Z" level=info msg="Starting container: 2599f9ed5c0e2999bd950416e2fb8a36a6630d5573fd3b8b6b9a9e700371c0c2" id=efe762f6-1dc4-4e79-ac80-301921f4a227 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.392175152Z" level=info msg="Started container" PID=1585 containerID=26fe3cafb4e7fda6cb184894fd0612b5f3bc014701499c1835280b53924e4328 description=kube-system/kindnet-x9k2d/kindnet-cni id=ba9d6cff-6762-42f7-8c67-dd996c540ae5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc36a35ef24aca5367674dbec4fb02e69f48a96cc2ce87689b73dc27361f0ccc
	Oct 13 22:04:06 newest-cni-843554 crio[780]: time="2025-10-13T22:04:06.395323294Z" level=info msg="Started container" PID=1586 containerID=2599f9ed5c0e2999bd950416e2fb8a36a6630d5573fd3b8b6b9a9e700371c0c2 description=kube-system/kube-proxy-zgkgm/kube-proxy id=efe762f6-1dc4-4e79-ac80-301921f4a227 name=/runtime.v1.RuntimeService/StartContainer sandboxID=347a06efffbfe5b840f3b766ec4972182d39963c64dd2e4570400e4125fdbe50
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2599f9ed5c0e2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   347a06efffbfe       kube-proxy-zgkgm                            kube-system
	26fe3cafb4e7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   cc36a35ef24ac       kindnet-x9k2d                               kube-system
	f6609ded1b953       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   13 seconds ago      Running             kube-scheduler            0                   ddfb1766c6863       kube-scheduler-newest-cni-843554            kube-system
	024bd2cff2ad6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   13 seconds ago      Running             etcd                      0                   e99aa67fc71df       etcd-newest-cni-843554                      kube-system
	427c540ac62b2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   13 seconds ago      Running             kube-apiserver            0                   c6e1a5d96884e       kube-apiserver-newest-cni-843554            kube-system
	68a8f83937c63       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   13 seconds ago      Running             kube-controller-manager   0                   36366f630f982       kube-controller-manager-newest-cni-843554   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-843554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843554
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:04:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:04:00 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:04:00 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:04:00 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:04:00 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-843554
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                251c90b5-21d9-4e58-8666-2d86d8084a26
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843554                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-x9k2d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-843554             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-843554    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-zgkgm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-843554             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 14s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 14s)  kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 14s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-843554 event: Registered Node newest-cni-843554 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [024bd2cff2ad61e54468491b51176430f0101fb4f2a9d8219b708d01049492ee] <==
	{"level":"warn","ts":"2025-10-13T22:03:57.324851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.331857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.341557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.349154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.356342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.362985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.369943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.376844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.383334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.391292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.399172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.406632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.413370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.420959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.431537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.437573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.445042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.452029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.465783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.473380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.480322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.492234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.499588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.506296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:57.560006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:04:08 up  1:46,  0 user,  load average: 4.45, 3.64, 5.86
	Linux newest-cni-843554 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [26fe3cafb4e7fda6cb184894fd0612b5f3bc014701499c1835280b53924e4328] <==
	I1013 22:04:06.666377       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:06.666972       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:04:06.667186       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:06.667262       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:06.667311       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:06.963091       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:06.963176       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:06.963191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:06.963436       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:07.262472       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:07.262499       1 metrics.go:72] Registering metrics
	I1013 22:04:07.262555       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [427c540ac62b2520531f3d4a796dda3718ff55e08cb5b50d2a2802835ff9b3e2] <==
	I1013 22:03:58.066831       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:03:58.068895       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:03:58.069027       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1013 22:03:58.069405       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1013 22:03:58.074676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:58.075120       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:03:58.108444       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:03:58.272344       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:03:58.970501       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:03:58.974979       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:03:58.975075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:03:59.513194       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:03:59.552508       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:03:59.675800       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:03:59.682162       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1013 22:03:59.683248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:03:59.687413       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:03:59.989169       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:00.605668       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:00.616253       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:04:00.623342       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:04:05.244069       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:04:05.248529       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:04:05.897257       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:05.991732       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [68a8f83937c6381903de46864dbb4f7330638e91f1f955ec4dd619fed0bd2290] <==
	I1013 22:04:04.948286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:04.955424       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:04:04.961725       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:04:04.988289       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:04:04.988436       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:04:04.988302       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:04:04.988631       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:04:04.988883       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:04:04.989050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:04:04.989073       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:04:04.989086       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:04:04.989455       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:04:04.989484       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:04:04.989101       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:04:04.989935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:04:04.990037       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:04:04.990194       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:04:04.990433       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:04:04.992646       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:04:04.994983       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:04:05.000335       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:04:05.000891       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:05.006051       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:04:05.014377       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:04:05.018616       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2599f9ed5c0e2999bd950416e2fb8a36a6630d5573fd3b8b6b9a9e700371c0c2] <==
	I1013 22:04:06.449244       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:06.521772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:06.622057       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:06.622119       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:04:06.622208       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:06.648077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:06.648157       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:06.655401       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:06.655882       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:06.655919       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:06.657487       1 config.go:200] "Starting service config controller"
	I1013 22:04:06.657508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:06.657540       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:06.657546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:06.657561       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:06.657567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:06.659687       1 config.go:309] "Starting node config controller"
	I1013 22:04:06.659705       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:06.659713       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:06.758302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:06.758358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:04:06.758623       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f6609ded1b9538b859a1a0d422c33ec1d56e3fa8d946e4ef71e99faa81a9b675] <==
	E1013 22:03:58.028832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:03:58.029373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:03:58.029431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:03:58.029483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:03:58.029558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:03:58.029673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:03:58.029777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:03:58.029837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:03:58.029883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:03:58.029913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:03:58.030019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:03:58.030200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:03:58.030246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:03:58.865362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:03:58.890761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:03:58.891592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:03:58.914239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:03:58.972720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:03:59.044686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:03:59.074130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:03:59.186504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:03:59.229185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:03:59.289384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:03:59.367724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1013 22:04:01.925274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:04:00 newest-cni-843554 kubelet[1313]: I1013 22:04:00.631508    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0d03d5f2bfd6ca355aeccc3df6ad575-kubeconfig\") pod \"kube-scheduler-newest-cni-843554\" (UID: \"f0d03d5f2bfd6ca355aeccc3df6ad575\") " pod="kube-system/kube-scheduler-newest-cni-843554"
	Oct 13 22:04:00 newest-cni-843554 kubelet[1313]: I1013 22:04:00.631532    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eed854190c701f57b1fd13bdf0ea089b-k8s-certs\") pod \"kube-apiserver-newest-cni-843554\" (UID: \"eed854190c701f57b1fd13bdf0ea089b\") " pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:00 newest-cni-843554 kubelet[1313]: I1013 22:04:00.631547    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86ad922aff6ecdc879f4c5bd3e5bd6ac-kubeconfig\") pod \"kube-controller-manager-newest-cni-843554\" (UID: \"86ad922aff6ecdc879f4c5bd3e5bd6ac\") " pod="kube-system/kube-controller-manager-newest-cni-843554"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.427008    1313 apiserver.go:52] "Watching apiserver"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.430570    1313 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.470276    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.470968    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: E1013 22:04:01.479224    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-843554\" already exists" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: E1013 22:04:01.483398    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-843554\" already exists" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.513832    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-843554" podStartSLOduration=1.513805091 podStartE2EDuration="1.513805091s" podCreationTimestamp="2025-10-13 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:01.502590244 +0000 UTC m=+1.142412428" watchObservedRunningTime="2025-10-13 22:04:01.513805091 +0000 UTC m=+1.153627272"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.527349    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-843554" podStartSLOduration=1.527326008 podStartE2EDuration="1.527326008s" podCreationTimestamp="2025-10-13 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:01.5140541 +0000 UTC m=+1.153876284" watchObservedRunningTime="2025-10-13 22:04:01.527326008 +0000 UTC m=+1.167148192"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.527478    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-843554" podStartSLOduration=1.527471159 podStartE2EDuration="1.527471159s" podCreationTimestamp="2025-10-13 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:01.526971119 +0000 UTC m=+1.166793303" watchObservedRunningTime="2025-10-13 22:04:01.527471159 +0000 UTC m=+1.167293356"
	Oct 13 22:04:01 newest-cni-843554 kubelet[1313]: I1013 22:04:01.554507    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-843554" podStartSLOduration=1.554486705 podStartE2EDuration="1.554486705s" podCreationTimestamp="2025-10-13 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:01.537878921 +0000 UTC m=+1.177701097" watchObservedRunningTime="2025-10-13 22:04:01.554486705 +0000 UTC m=+1.194308889"
	Oct 13 22:04:04 newest-cni-843554 kubelet[1313]: I1013 22:04:04.987350    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:04:04 newest-cni-843554 kubelet[1313]: I1013 22:04:04.988258    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066075    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-cni-cfg\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066121    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-xtables-lock\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066151    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jk7k\" (UniqueName: \"kubernetes.io/projected/dda7fe66-1403-4701-b8af-1b3502336d9d-kube-api-access-4jk7k\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066175    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6dbeddd-feee-4c6f-a51b-add1412128a2-kube-proxy\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066199    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-lib-modules\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066222    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-xtables-lock\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066247    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnzp\" (UniqueName: \"kubernetes.io/projected/f6dbeddd-feee-4c6f-a51b-add1412128a2-kube-api-access-4lnzp\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.066268    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-lib-modules\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.514543    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zgkgm" podStartSLOduration=1.514517356 podStartE2EDuration="1.514517356s" podCreationTimestamp="2025-10-13 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:06.498676591 +0000 UTC m=+6.138498775" watchObservedRunningTime="2025-10-13 22:04:06.514517356 +0000 UTC m=+6.154339540"
	Oct 13 22:04:06 newest-cni-843554 kubelet[1313]: I1013 22:04:06.546100    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x9k2d" podStartSLOduration=1.5460736769999999 podStartE2EDuration="1.546073677s" podCreationTimestamp="2025-10-13 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:06.531539444 +0000 UTC m=+6.171361629" watchObservedRunningTime="2025-10-13 22:04:06.546073677 +0000 UTC m=+6.185895862"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-843554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-br2pb storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner: exit status 1 (74.485655ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-br2pb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

TestStartStop/group/newest-cni/serial/Pause (6.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-843554 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-843554 --alsologtostderr -v=1: exit status 80 (2.417913393s)

-- stdout --
	* Pausing node newest-cni-843554 ... 
	
	

-- /stdout --
** stderr ** 
	I1013 22:04:28.704280  497142 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:28.704440  497142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:28.704448  497142 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:28.704455  497142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:28.704908  497142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:28.705276  497142 out.go:368] Setting JSON to false
	I1013 22:04:28.705333  497142 mustload.go:65] Loading cluster: newest-cni-843554
	I1013 22:04:28.705854  497142 config.go:182] Loaded profile config "newest-cni-843554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:28.706478  497142 cli_runner.go:164] Run: docker container inspect newest-cni-843554 --format={{.State.Status}}
	I1013 22:04:28.732688  497142 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:28.733235  497142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:28.828064  497142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-13 22:04:28.814500001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:28.828876  497142 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-843554 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:04:28.830773  497142 out.go:179] * Pausing node newest-cni-843554 ... 
	I1013 22:04:28.833150  497142 host.go:66] Checking if "newest-cni-843554" exists ...
	I1013 22:04:28.833519  497142 ssh_runner.go:195] Run: systemctl --version
	I1013 22:04:28.833575  497142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-843554
	I1013 22:04:28.856250  497142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/newest-cni-843554/id_rsa Username:docker}
	I1013 22:04:28.963931  497142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:28.979175  497142 pause.go:52] kubelet running: true
	I1013 22:04:28.979265  497142 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:04:29.155734  497142 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:04:29.155840  497142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:04:29.248475  497142 cri.go:89] found id: "06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044"
	I1013 22:04:29.248516  497142 cri.go:89] found id: "6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3"
	I1013 22:04:29.248523  497142 cri.go:89] found id: "327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4"
	I1013 22:04:29.248528  497142 cri.go:89] found id: "0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717"
	I1013 22:04:29.248532  497142 cri.go:89] found id: "1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb"
	I1013 22:04:29.248543  497142 cri.go:89] found id: "be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52"
	I1013 22:04:29.248548  497142 cri.go:89] found id: ""
	I1013 22:04:29.248611  497142 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:04:29.264796  497142 retry.go:31] will retry after 234.434469ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:29Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:04:29.500221  497142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:29.521082  497142 pause.go:52] kubelet running: false
	I1013 22:04:29.521150  497142 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:04:29.700282  497142 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:04:29.700374  497142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:04:29.791183  497142 cri.go:89] found id: "06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044"
	I1013 22:04:29.791230  497142 cri.go:89] found id: "6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3"
	I1013 22:04:29.791261  497142 cri.go:89] found id: "327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4"
	I1013 22:04:29.791266  497142 cri.go:89] found id: "0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717"
	I1013 22:04:29.791271  497142 cri.go:89] found id: "1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb"
	I1013 22:04:29.791276  497142 cri.go:89] found id: "be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52"
	I1013 22:04:29.791281  497142 cri.go:89] found id: ""
	I1013 22:04:29.791343  497142 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:04:29.807029  497142 retry.go:31] will retry after 393.564343ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:29Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:04:30.201323  497142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:30.214896  497142 pause.go:52] kubelet running: false
	I1013 22:04:30.214949  497142 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:04:30.361618  497142 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:04:30.361711  497142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:04:30.439506  497142 cri.go:89] found id: "06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044"
	I1013 22:04:30.439531  497142 cri.go:89] found id: "6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3"
	I1013 22:04:30.439537  497142 cri.go:89] found id: "327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4"
	I1013 22:04:30.439546  497142 cri.go:89] found id: "0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717"
	I1013 22:04:30.439550  497142 cri.go:89] found id: "1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb"
	I1013 22:04:30.439556  497142 cri.go:89] found id: "be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52"
	I1013 22:04:30.439560  497142 cri.go:89] found id: ""
	I1013 22:04:30.439609  497142 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:04:30.451941  497142 retry.go:31] will retry after 335.440586ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:30Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:04:30.788604  497142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:30.804468  497142 pause.go:52] kubelet running: false
	I1013 22:04:30.804535  497142 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:04:30.936746  497142 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:04:30.936836  497142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:04:31.025879  497142 cri.go:89] found id: "06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044"
	I1013 22:04:31.025904  497142 cri.go:89] found id: "6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3"
	I1013 22:04:31.025909  497142 cri.go:89] found id: "327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4"
	I1013 22:04:31.025913  497142 cri.go:89] found id: "0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717"
	I1013 22:04:31.025916  497142 cri.go:89] found id: "1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb"
	I1013 22:04:31.025919  497142 cri.go:89] found id: "be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52"
	I1013 22:04:31.025922  497142 cri.go:89] found id: ""
	I1013 22:04:31.025969  497142 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:04:31.042161  497142 out.go:203] 
	W1013 22:04:31.043608  497142 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:04:31.043636  497142 out.go:285] * 
	W1013 22:04:31.049794  497142 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:04:31.051153  497142 out.go:203] 

                                                
                                                
** /stderr **
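The failure above is mechanical rather than flaky: the pause path disables the kubelet, lists the cluster's containers through crictl, and then asks `sudo runc list -f json` for the running set, but runc's default state directory /run/runc does not exist on this CRI-O node (the runtime evidently keeps its state under a different root), so every attempt fails identically until the command aborts with GUEST_PAUSE. Below is a minimal Go sketch of the retry shape visible in the retry.go lines; retryWithBackoff is a hypothetical helper, not minikube's actual retry API:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or the attempt budget is
	// spent, sleeping a jittered delay between attempts; that sleep is what
	// produces the "will retry after 234.434469ms" style lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		// Stand-in for `sudo runc list -f json`, which failed the same way
		// on every attempt in the log above.
		err := retryWithBackoff(4, 250*time.Millisecond, func() error {
			return fmt.Errorf("open /run/runc: no such file or directory")
		})
		fmt.Println(err)
	}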
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-843554 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
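The snapshot above is read straight from the host environment; a minimal Go equivalent, printing "<empty>" for unset variables the way the helper does:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Record each proxy variable so a post-mortem can rule proxies out.
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q\n", k, v)
		}
	}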
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-843554
helpers_test.go:243: (dbg) docker inspect newest-cni-843554:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	        "Created": "2025-10-13T22:03:44.63390679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494216,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:17.93986727Z",
	            "FinishedAt": "2025-10-13T22:04:17.097932073Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hosts",
	        "LogPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c-json.log",
	        "Name": "/newest-cni-843554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	                "LowerDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843554",
	                "Source": "/var/lib/docker/volumes/newest-cni-843554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843554",
	                "name.minikube.sigs.k8s.io": "newest-cni-843554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ec7b822511faa57a71897b93c380f1fb68bfa8e59609bb619a8c8ea373e267d",
	            "SandboxKey": "/var/run/docker/netns/7ec7b822511f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-843554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:df:6b:5b:8f:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a00d8bcb8b486fb836fa8e6ea8fe1361ab235dd6af3b3af1489d461e67a488",
	                    "EndpointID": "3b2eb541eff9553283498f0444a9fc758047590c6f08dc887a647976ffe3d3ac",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843554",
	                        "d26d618d283e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
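The SSH endpoint the pause command dialed (127.0.0.1:33093 in the sshutil line earlier) comes from the NetworkSettings.Ports block of this inspect output. A short sketch that recovers it with the same Go template minikube's cli_runner ran above, assuming the container still exists on the local Docker daemon:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The exact template from the cli_runner invocation in the log.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"newest-cni-843554").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33093" for the state captured above
	}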
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554: exit status 2 (350.211015ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
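`minikube status` encodes component state in its exit code, which is why the helper records exit status 2 as "may be ok": the host is Running even though parts of the cluster are stopped. A hedged Go sketch of reading that code the way the harness effectively does:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-843554").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero exit still carries usable stdout ("Running" / code 2 above).
			fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), ee.ExitCode())
			return
		}
		fmt.Printf("host=%s exit=0\n", strings.TrimSpace(string(out)))
	}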
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25: (1.019238094s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-expiration-894101                                                                                                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-505851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ stop    │ -p newest-cni-843554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-843554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-505851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 pgrep -a kubelet                                                                                                                                                                                                               │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ image   │ newest-cni-843554 image list --format=json                                                                                                                                                                                                    │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ pause   │ -p newest-cni-843554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ stop    │ -p embed-certs-521669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:26.956859  496036 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:26.957185  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957196  496036 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:26.957200  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957426  496036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:26.958103  496036 out.go:368] Setting JSON to false
	I1013 22:04:26.959552  496036 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6415,"bootTime":1760386652,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:04:26.959670  496036 start.go:141] virtualization: kvm guest
	I1013 22:04:26.961750  496036 out.go:179] * [default-k8s-diff-port-505851] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:04:26.963349  496036 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:26.963411  496036 notify.go:220] Checking for updates...
	I1013 22:04:26.965760  496036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:26.967036  496036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:26.968402  496036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:04:26.969929  496036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:04:26.971386  496036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:26.973194  496036 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:26.973733  496036 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:26.999446  496036 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:04:26.999551  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.071876  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.059169849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.072057  496036 docker.go:318] overlay module found
	I1013 22:04:27.073848  496036 out.go:179] * Using the docker driver based on existing profile
	W1013 22:04:24.585424  487583 node_ready.go:57] node "auto-200102" has "Ready":"False" status (will retry)
	I1013 22:04:25.084904  487583 node_ready.go:49] node "auto-200102" is "Ready"
	I1013 22:04:25.084967  487583 node_ready.go:38] duration metric: took 11.503181949s for node "auto-200102" to be "Ready" ...
	I1013 22:04:25.085065  487583 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:25.085136  487583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:25.100833  487583 api_server.go:72] duration metric: took 11.780125697s to wait for apiserver process to appear ...
	I1013 22:04:25.100936  487583 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:25.100960  487583 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:04:25.108756  487583 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:04:25.110322  487583 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:25.110346  487583 api_server.go:131] duration metric: took 9.402756ms to wait for apiserver health ...
	I1013 22:04:25.110356  487583 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:25.115120  487583 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:25.115171  487583 system_pods.go:61] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.115184  487583 system_pods.go:61] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.115199  487583 system_pods.go:61] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.115205  487583 system_pods.go:61] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.115211  487583 system_pods.go:61] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.115222  487583 system_pods.go:61] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.115227  487583 system_pods.go:61] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.115237  487583 system_pods.go:61] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.115246  487583 system_pods.go:74] duration metric: took 4.883205ms to wait for pod list to return data ...
	I1013 22:04:25.115261  487583 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:25.118286  487583 default_sa.go:45] found service account: "default"
	I1013 22:04:25.118321  487583 default_sa.go:55] duration metric: took 3.046489ms for default service account to be created ...
	I1013 22:04:25.118333  487583 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:04:25.122038  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.122073  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.122080  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.122088  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.122093  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.122098  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.122104  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.122109  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.122117  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.122142  487583 retry.go:31] will retry after 273.856133ms: missing components: kube-dns
	I1013 22:04:25.402216  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.402254  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Running
	I1013 22:04:25.402263  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.402269  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.402281  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.402335  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.402359  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.402366  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.402376  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Running
	I1013 22:04:25.402385  487583 system_pods.go:126] duration metric: took 284.045366ms to wait for k8s-apps to be running ...
	I1013 22:04:25.402397  487583 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:04:25.402464  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:25.419657  487583 system_svc.go:56] duration metric: took 17.243747ms WaitForService to wait for kubelet
	I1013 22:04:25.419690  487583 kubeadm.go:586] duration metric: took 12.098989753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:25.419712  487583 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:25.423101  487583 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:25.423126  487583 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:25.423142  487583 node_conditions.go:105] duration metric: took 3.423638ms to run NodePressure ...
	I1013 22:04:25.423153  487583 start.go:241] waiting for startup goroutines ...
	I1013 22:04:25.423161  487583 start.go:246] waiting for cluster config update ...
	I1013 22:04:25.423171  487583 start.go:255] writing updated cluster config ...
	I1013 22:04:25.423441  487583 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:25.427987  487583 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:25.432645  487583 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.437593  487583 pod_ready.go:94] pod "coredns-66bc5c9577-sdbk9" is "Ready"
	I1013 22:04:25.437619  487583 pod_ready.go:86] duration metric: took 4.938693ms for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.439782  487583 pod_ready.go:83] waiting for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.443878  487583 pod_ready.go:94] pod "etcd-auto-200102" is "Ready"
	I1013 22:04:25.443902  487583 pod_ready.go:86] duration metric: took 4.093991ms for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.446617  487583 pod_ready.go:83] waiting for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.451319  487583 pod_ready.go:94] pod "kube-apiserver-auto-200102" is "Ready"
	I1013 22:04:25.451347  487583 pod_ready.go:86] duration metric: took 4.705052ms for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.453421  487583 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.832825  487583 pod_ready.go:94] pod "kube-controller-manager-auto-200102" is "Ready"
	I1013 22:04:25.832855  487583 pod_ready.go:86] duration metric: took 379.41459ms for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.033261  487583 pod_ready.go:83] waiting for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.432688  487583 pod_ready.go:94] pod "kube-proxy-m6qcc" is "Ready"
	I1013 22:04:26.432727  487583 pod_ready.go:86] duration metric: took 399.435877ms for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.634447  487583 pod_ready.go:83] waiting for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034881  487583 pod_ready.go:94] pod "kube-scheduler-auto-200102" is "Ready"
	I1013 22:04:27.034917  487583 pod_ready.go:86] duration metric: took 400.438557ms for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034934  487583 pod_ready.go:40] duration metric: took 1.606887332s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:27.100050  487583 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.101928  487583 out.go:179] * Done! kubectl is now configured to use "auto-200102" cluster and "default" namespace by default
	I1013 22:04:27.075528  496036 start.go:305] selected driver: docker
	I1013 22:04:27.075556  496036 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.075659  496036 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:27.076485  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.157122  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.144207388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.157498  496036 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:27.157536  496036 cni.go:84] Creating CNI manager for ""
	I1013 22:04:27.157595  496036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:27.157655  496036 start.go:349] cluster config:
	{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.159479  496036 out.go:179] * Starting "default-k8s-diff-port-505851" primary control-plane node in "default-k8s-diff-port-505851" cluster
	I1013 22:04:27.161029  496036 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:27.162498  496036 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:04:27.164129  496036 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:27.164182  496036 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:04:27.164196  496036 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:27.164240  496036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:04:27.164327  496036 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:04:27.164343  496036 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:04:27.164477  496036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:04:27.192103  496036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:04:27.192129  496036 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:04:27.192151  496036 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:27.192180  496036 start.go:360] acquireMachinesLock for default-k8s-diff-port-505851: {Name:mkaf957bc5ced7f5c930a2e33ff0ee7c156af144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:27.192250  496036 start.go:364] duration metric: took 48.731µs to acquireMachinesLock for "default-k8s-diff-port-505851"
	I1013 22:04:27.192269  496036 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:04:27.192275  496036 fix.go:54] fixHost starting: 
	I1013 22:04:27.192558  496036 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:04:27.214564  496036 fix.go:112] recreateIfNeeded on default-k8s-diff-port-505851: state=Stopped err=<nil>
	W1013 22:04:27.214600  496036 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:04:26.881029  494020 addons.go:514] duration metric: took 1.97979899s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:04:27.366173  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.371471  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:27.371500  494020 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:27.866391  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.871337  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:04:27.872820  494020 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:27.872853  494020 api_server.go:131] duration metric: took 1.007313107s to wait for apiserver health ...
	I1013 22:04:27.872865  494020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:27.877852  494020 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:27.877910  494020 system_pods.go:61] "coredns-66bc5c9577-br2pb" [531000de-cace-4ffd-ae65-51208d0783c5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.877930  494020 system_pods.go:61] "etcd-newest-cni-843554" [b1660b76-27be-45d2-89da-274c5320b389] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:27.877954  494020 system_pods.go:61] "kindnet-x9k2d" [dda7fe66-1403-4701-b8af-1b3502336d9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:04:27.877964  494020 system_pods.go:61] "kube-apiserver-newest-cni-843554" [0c7381fb-b918-4708-afa7-ad537bf1c3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:27.877973  494020 system_pods.go:61] "kube-controller-manager-newest-cni-843554" [b31669bc-26b0-45c5-aae7-2e7132dcfe60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:27.877982  494020 system_pods.go:61] "kube-proxy-zgkgm" [f6dbeddd-feee-4c6f-a51b-add1412128a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:04:27.878003  494020 system_pods.go:61] "kube-scheduler-newest-cni-843554" [dea0fcee-f00b-4190-a4fa-4bc097a9f7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:27.878016  494020 system_pods.go:61] "storage-provisioner" [2f536c11-96c7-4cb7-8128-eb53b3d44ce8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.878025  494020 system_pods.go:74] duration metric: took 5.151561ms to wait for pod list to return data ...
	I1013 22:04:27.878039  494020 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:27.883229  494020 default_sa.go:45] found service account: "default"
	I1013 22:04:27.883279  494020 default_sa.go:55] duration metric: took 5.230572ms for default service account to be created ...
	I1013 22:04:27.883296  494020 kubeadm.go:586] duration metric: took 2.982070176s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:04:27.883338  494020 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:27.886660  494020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:27.886696  494020 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:27.886715  494020 node_conditions.go:105] duration metric: took 3.364375ms to run NodePressure ...
	I1013 22:04:27.886731  494020 start.go:241] waiting for startup goroutines ...
	I1013 22:04:27.886741  494020 start.go:246] waiting for cluster config update ...
	I1013 22:04:27.886757  494020 start.go:255] writing updated cluster config ...
	I1013 22:04:27.887108  494020 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:27.947927  494020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.951086  494020 out.go:179] * Done! kubectl is now configured to use "newest-cni-843554" cluster and "default" namespace by default
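
The healthz sequence above is the restart path's readiness gate: minikube polls the apiserver's /healthz roughly every 500ms, treats a 500 (here with poststarthook/rbac/bootstrap-roles still failing) as not-ready, and stops at the first 200. Below is a minimal Go sketch of that polling pattern, not minikube's actual code; the endpoint is copied from the log, and InsecureSkipVerify stands in for the cluster client certs minikube really uses.

	// healthzpoll: poll /healthz until it returns 200, printing the 500
	// body so failed poststarthooks stay visible (illustrative only).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping TLS verification only to keep the sketch
			// self-contained; a real client loads the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// A 500 body lists each check, e.g.
				// [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

In the log the loop needed two polls: the 22:04:27.371 attempt returned 500, the 22:04:27.871 attempt returned 200, after which the harness moved on to waiting for kube-system pods.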
	
	
	==> CRI-O <==
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.490113949Z" level=info msg="Running pod sandbox: kube-system/kindnet-x9k2d/POD" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.490214887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.49344124Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.494377364Z" level=info msg="Ran pod sandbox 5069c1438b700ca32d4b8e9127b4885a9fe4e53213f2aca4d6ac4a6fe0935765 with infra container: kube-system/kube-proxy-zgkgm/POD" id=550c57b5-94f2-4c38-bb41-a6ecb98fb60c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.494716782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.496824486Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.497778758Z" level=info msg="Ran pod sandbox 7ce52899556293fa1e49cc1ec9069d3e66889cb8024620ffa3c65c843bdb15a0 with infra container: kube-system/kindnet-x9k2d/POD" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.502183491Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=da1095bf-a8ad-4dfe-b1c5-0d8526ce144e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.502229207Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=45cf3cec-c6f1-4101-bd96-cc543a4b0474 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.50407325Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f6fed489-6867-457c-88f4-6c6844daadb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.504793629Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b69569cd-0b63-4601-a894-43019bdbda6a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.506209485Z" level=info msg="Creating container: kube-system/kindnet-x9k2d/kindnet-cni" id=a22957b2-dbfa-4667-a738-0344319ba7a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.506416762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.507256757Z" level=info msg="Creating container: kube-system/kube-proxy-zgkgm/kube-proxy" id=87915dbc-7230-42f2-bc81-4329d236d569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.509548927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.512562185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.513210562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.516856371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.517672063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.544564647Z" level=info msg="Created container 6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3: kube-system/kindnet-x9k2d/kindnet-cni" id=a22957b2-dbfa-4667-a738-0344319ba7a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.545314192Z" level=info msg="Starting container: 6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3" id=4b346e07-8cc2-45c3-a4f2-a4e5d311e248 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.548029017Z" level=info msg="Started container" PID=1052 containerID=6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3 description=kube-system/kindnet-x9k2d/kindnet-cni id=4b346e07-8cc2-45c3-a4f2-a4e5d311e248 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ce52899556293fa1e49cc1ec9069d3e66889cb8024620ffa3c65c843bdb15a0
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.551088674Z" level=info msg="Created container 06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044: kube-system/kube-proxy-zgkgm/kube-proxy" id=87915dbc-7230-42f2-bc81-4329d236d569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.551845209Z" level=info msg="Starting container: 06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044" id=8bf965a9-8866-41c3-a302-af8fce749b5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.555576102Z" level=info msg="Started container" PID=1053 containerID=06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044 description=kube-system/kube-proxy-zgkgm/kube-proxy id=8bf965a9-8866-41c3-a302-af8fce749b5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5069c1438b700ca32d4b8e9127b4885a9fe4e53213f2aca4d6ac4a6fe0935765
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	06775b1276b3a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   5069c1438b700       kube-proxy-zgkgm                            kube-system
	6f0c64a9efcd8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   7ce5289955629       kindnet-x9k2d                               kube-system
	327444d3eb25b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   749fcb4b90b08       kube-controller-manager-newest-cni-843554   kube-system
	0b277c65787c4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   8a4a07dd64e45       kube-apiserver-newest-cni-843554            kube-system
	1f033e42eeaf3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   020e5d3e93776       etcd-newest-cni-843554                      kube-system
	be70fd932072f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   4fd026e3ce246       kube-scheduler-newest-cni-843554            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-843554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843554
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-843554
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                251c90b5-21d9-4e58-8666-2d86d8084a26
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843554                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-x9k2d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-843554             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-843554    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-zgkgm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-843554             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)  kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 38s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     32s                kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28s                node-controller  Node newest-cni-843554 event: Registered Node newest-cni-843554 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x8 over 8s)    kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-843554 event: Registered Node newest-cni-843554 in Controller
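
The describe output above records why coredns and storage-provisioner stay Pending: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config has been written yet (Ready=False, NetworkReady=false). A hedged client-go sketch that reads the same taints and conditions programmatically; the node name is taken from the log, and it assumes the current context in ~/.kube/config points at this cluster.

	// nodetaints: print a node's taints and conditions, reproducing
	// the diagnosis in the describe output above (a sketch, not the
	// test harness's code).
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-843554", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s:%s\n", t.Key, t.Effect) // e.g. node.kubernetes.io/not-ready:NoSchedule
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
		}
	}

Once kindnet writes its config under /etc/cni/net.d/, the kubelet flips NetworkReady, the node-lifecycle controller removes the not-ready taint, and the Pending pods can schedule.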
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb] <==
	{"level":"warn","ts":"2025-10-13T22:04:25.630635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.641150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.648940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.656175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.662930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.669523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.676571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.691030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.695707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.702785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.710452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.716902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.723115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.729294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.736529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.743512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.749808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.756374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.763118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.770409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.777358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.790293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.797046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.803591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.861139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54260","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:04:32 up  1:46,  0 user,  load average: 4.00, 3.59, 5.79
	Linux newest-cni-843554 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3] <==
	I1013 22:04:27.771019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:27.771292       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:04:27.771423       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:27.771444       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:27.771475       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:28.070804       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:28.071048       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:28.071069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:28.071256       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:28.471304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:28.471339       1 metrics.go:72] Registering metrics
	I1013 22:04:28.472725       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717] <==
	I1013 22:04:26.344704       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:26.344711       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:26.344733       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:26.344799       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:04:26.344810       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:04:26.345002       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:04:26.345043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:04:26.345079       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:04:26.349775       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:26.349941       1 policy_source.go:240] refreshing policies
	I1013 22:04:26.352804       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 22:04:26.353896       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:04:26.378036       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:26.648745       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:26.691492       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:26.716379       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:26.724176       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:26.732987       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:26.771822       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.71.79"}
	I1013 22:04:26.784389       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.85.147"}
	I1013 22:04:27.261725       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:04:29.777889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:30.026564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:30.026564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:30.125912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4] <==
	I1013 22:04:29.684688       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:04:29.685861       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:04:29.686148       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:04:29.686284       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:04:29.686344       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-843554"
	I1013 22:04:29.686482       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:04:29.689235       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:04:29.690531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:04:29.690657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:29.691734       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:04:29.693330       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:04:29.694542       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:04:29.695148       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:04:29.696877       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:04:29.700061       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:04:29.701237       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:04:29.701257       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:04:29.701270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:04:29.703412       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:04:29.703909       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:04:29.705405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:04:29.705712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:04:29.707029       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:04:29.711768       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:04:29.725960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044] <==
	I1013 22:04:27.602215       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:27.660250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:27.761114       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:27.761160       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:04:27.761258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:27.783613       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:27.783686       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:27.790823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:27.791248       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:27.792142       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:27.794090       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:27.794140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:27.794203       1 config.go:309] "Starting node config controller"
	I1013 22:04:27.794277       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:27.794288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:27.794406       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:27.794415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:27.794111       1 config.go:200] "Starting service config controller"
	I1013 22:04:27.794481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:27.894335       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:27.895561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:04:27.895594       1 shared_informer.go:356] "Caches are synced" controller="service config"
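
The paired "Waiting for caches to sync" / "Caches are synced" lines here (and in the controller-manager and scheduler logs above) come from client-go's shared_informer helpers, as the shared_informer.go file:line references show: each controller blocks until its informers' initial LIST completes before acting on events. A minimal sketch of that pattern under the same kubeconfig assumptions as above, illustrative rather than kube-proxy's actual startup code:

	// cachesync: start a shared informer and block until its cache syncs,
	// the handshake the log lines above reflect.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // runs the watch loops in the background

		// Blocks until the initial LIST has populated the local cache;
		// returning true corresponds to the "Caches are synced" line.
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced; handlers now see a complete view")
	}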
	
	
	==> kube-scheduler [be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52] <==
	I1013 22:04:25.508377       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:04:26.268479       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:04:26.268516       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:04:26.268530       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:04:26.268537       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:04:26.316602       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:26.316638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:26.319638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:26.319678       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:26.320202       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:26.320297       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:26.420449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.220843     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.221045     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.221149     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.380438     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386799     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386881     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386910     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.387897     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.393741     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-843554\" already exists" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.393774     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.400836     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-843554\" already exists" pod="kube-system/kube-controller-manager-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.400871     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.407277     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-843554\" already exists" pod="kube-system/kube-scheduler-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.407314     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.413406     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-843554\" already exists" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.177271     674 apiserver.go:52] "Watching apiserver"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.278675     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292486     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-lib-modules\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292571     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-cni-cfg\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292780     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-xtables-lock\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292870     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-xtables-lock\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292899     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-lib-modules\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843554 -n newest-cni-843554: exit status 2 (363.03814ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-843554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn: exit status 1 (82.765106ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-br2pb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wdvfg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bxxrn" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-843554
helpers_test.go:243: (dbg) docker inspect newest-cni-843554:

-- stdout --
	[
	    {
	        "Id": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	        "Created": "2025-10-13T22:03:44.63390679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494216,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:17.93986727Z",
	            "FinishedAt": "2025-10-13T22:04:17.097932073Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/hosts",
	        "LogPath": "/var/lib/docker/containers/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c/d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c-json.log",
	        "Name": "/newest-cni-843554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-843554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-843554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d26d618d283e713767743456467a02e98327a589917c92afd56b523d8910224c",
	                "LowerDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8117176ea132b2feb044432a5a52afef1a59a8eaae543faf8b6d4ada5437690c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-843554",
	                "Source": "/var/lib/docker/volumes/newest-cni-843554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-843554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-843554",
	                "name.minikube.sigs.k8s.io": "newest-cni-843554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ec7b822511faa57a71897b93c380f1fb68bfa8e59609bb619a8c8ea373e267d",
	            "SandboxKey": "/var/run/docker/netns/7ec7b822511f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-843554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:df:6b:5b:8f:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "57a00d8bcb8b486fb836fa8e6ea8fe1361ab235dd6af3b3af1489d461e67a488",
	                    "EndpointID": "3b2eb541eff9553283498f0444a9fc758047590c6f08dc887a647976ffe3d3ac",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-843554",
	                        "d26d618d283e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
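When only a few fields of the dump above are needed for triage, the same data can be read with Go-template filters instead of the full JSON (plain docker CLI, nothing minikube-specific):

	# runtime state of the kic container
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-843554
	# host port published for the apiserver port 8443
	docker port newest-cni-843554 8443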
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554: exit status 2 (348.440044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
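Only the Host field is printed by --format={{.Host}}, and it reads Running; the non-zero exit encodes the state of the remaining components, which is expected to be degraded while the profile is paused (kubelet stopped, control-plane containers frozen). For a per-component view, the same binary can emit JSON:

	out/minikube-linux-amd64 status -p newest-cni-843554 --output=json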
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-843554 logs -n 25: (1.030272667s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-expiration-894101                                                                                                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-505851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ stop    │ -p newest-cni-843554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-843554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-505851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 pgrep -a kubelet                                                                                                                                                                                                               │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ image   │ newest-cni-843554 image list --format=json                                                                                                                                                                                                    │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ pause   │ -p newest-cni-843554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ stop    │ -p embed-certs-521669 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:26.956859  496036 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:26.957185  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957196  496036 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:26.957200  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957426  496036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:26.958103  496036 out.go:368] Setting JSON to false
	I1013 22:04:26.959552  496036 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6415,"bootTime":1760386652,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:04:26.959670  496036 start.go:141] virtualization: kvm guest
	I1013 22:04:26.961750  496036 out.go:179] * [default-k8s-diff-port-505851] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:04:26.963349  496036 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:26.963411  496036 notify.go:220] Checking for updates...
	I1013 22:04:26.965760  496036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:26.967036  496036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:26.968402  496036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:04:26.969929  496036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:04:26.971386  496036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:26.973194  496036 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:26.973733  496036 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:26.999446  496036 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:04:26.999551  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.071876  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.059169849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.072057  496036 docker.go:318] overlay module found
	I1013 22:04:27.073848  496036 out.go:179] * Using the docker driver based on existing profile
	W1013 22:04:24.585424  487583 node_ready.go:57] node "auto-200102" has "Ready":"False" status (will retry)
	I1013 22:04:25.084904  487583 node_ready.go:49] node "auto-200102" is "Ready"
	I1013 22:04:25.084967  487583 node_ready.go:38] duration metric: took 11.503181949s for node "auto-200102" to be "Ready" ...
	I1013 22:04:25.085065  487583 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:25.085136  487583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:25.100833  487583 api_server.go:72] duration metric: took 11.780125697s to wait for apiserver process to appear ...
	I1013 22:04:25.100936  487583 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:25.100960  487583 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:04:25.108756  487583 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:04:25.110322  487583 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:25.110346  487583 api_server.go:131] duration metric: took 9.402756ms to wait for apiserver health ...
	I1013 22:04:25.110356  487583 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:25.115120  487583 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:25.115171  487583 system_pods.go:61] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.115184  487583 system_pods.go:61] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.115199  487583 system_pods.go:61] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.115205  487583 system_pods.go:61] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.115211  487583 system_pods.go:61] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.115222  487583 system_pods.go:61] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.115227  487583 system_pods.go:61] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.115237  487583 system_pods.go:61] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.115246  487583 system_pods.go:74] duration metric: took 4.883205ms to wait for pod list to return data ...
	I1013 22:04:25.115261  487583 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:25.118286  487583 default_sa.go:45] found service account: "default"
	I1013 22:04:25.118321  487583 default_sa.go:55] duration metric: took 3.046489ms for default service account to be created ...
	I1013 22:04:25.118333  487583 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:04:25.122038  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.122073  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.122080  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.122088  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.122093  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.122098  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.122104  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.122109  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.122117  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.122142  487583 retry.go:31] will retry after 273.856133ms: missing components: kube-dns
	I1013 22:04:25.402216  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.402254  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Running
	I1013 22:04:25.402263  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.402269  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.402281  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.402335  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.402359  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.402366  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.402376  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Running
	I1013 22:04:25.402385  487583 system_pods.go:126] duration metric: took 284.045366ms to wait for k8s-apps to be running ...
	I1013 22:04:25.402397  487583 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:04:25.402464  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:25.419657  487583 system_svc.go:56] duration metric: took 17.243747ms WaitForService to wait for kubelet
	I1013 22:04:25.419690  487583 kubeadm.go:586] duration metric: took 12.098989753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:25.419712  487583 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:25.423101  487583 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:25.423126  487583 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:25.423142  487583 node_conditions.go:105] duration metric: took 3.423638ms to run NodePressure ...
	I1013 22:04:25.423153  487583 start.go:241] waiting for startup goroutines ...
	I1013 22:04:25.423161  487583 start.go:246] waiting for cluster config update ...
	I1013 22:04:25.423171  487583 start.go:255] writing updated cluster config ...
	I1013 22:04:25.423441  487583 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:25.427987  487583 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:25.432645  487583 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.437593  487583 pod_ready.go:94] pod "coredns-66bc5c9577-sdbk9" is "Ready"
	I1013 22:04:25.437619  487583 pod_ready.go:86] duration metric: took 4.938693ms for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.439782  487583 pod_ready.go:83] waiting for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.443878  487583 pod_ready.go:94] pod "etcd-auto-200102" is "Ready"
	I1013 22:04:25.443902  487583 pod_ready.go:86] duration metric: took 4.093991ms for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.446617  487583 pod_ready.go:83] waiting for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.451319  487583 pod_ready.go:94] pod "kube-apiserver-auto-200102" is "Ready"
	I1013 22:04:25.451347  487583 pod_ready.go:86] duration metric: took 4.705052ms for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.453421  487583 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.832825  487583 pod_ready.go:94] pod "kube-controller-manager-auto-200102" is "Ready"
	I1013 22:04:25.832855  487583 pod_ready.go:86] duration metric: took 379.41459ms for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.033261  487583 pod_ready.go:83] waiting for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.432688  487583 pod_ready.go:94] pod "kube-proxy-m6qcc" is "Ready"
	I1013 22:04:26.432727  487583 pod_ready.go:86] duration metric: took 399.435877ms for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.634447  487583 pod_ready.go:83] waiting for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034881  487583 pod_ready.go:94] pod "kube-scheduler-auto-200102" is "Ready"
	I1013 22:04:27.034917  487583 pod_ready.go:86] duration metric: took 400.438557ms for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034934  487583 pod_ready.go:40] duration metric: took 1.606887332s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:27.100050  487583 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.101928  487583 out.go:179] * Done! kubectl is now configured to use "auto-200102" cluster and "default" namespace by default
	I1013 22:04:27.075528  496036 start.go:305] selected driver: docker
	I1013 22:04:27.075556  496036 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.075659  496036 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:27.076485  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.157122  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.144207388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.157498  496036 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:27.157536  496036 cni.go:84] Creating CNI manager for ""
	I1013 22:04:27.157595  496036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:27.157655  496036 start.go:349] cluster config:
	{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.159479  496036 out.go:179] * Starting "default-k8s-diff-port-505851" primary control-plane node in "default-k8s-diff-port-505851" cluster
	I1013 22:04:27.161029  496036 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:27.162498  496036 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:04:27.164129  496036 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:27.164182  496036 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:04:27.164196  496036 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:27.164240  496036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:04:27.164327  496036 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:04:27.164343  496036 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:04:27.164477  496036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:04:27.192103  496036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:04:27.192129  496036 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:04:27.192151  496036 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:27.192180  496036 start.go:360] acquireMachinesLock for default-k8s-diff-port-505851: {Name:mkaf957bc5ced7f5c930a2e33ff0ee7c156af144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:27.192250  496036 start.go:364] duration metric: took 48.731µs to acquireMachinesLock for "default-k8s-diff-port-505851"
	I1013 22:04:27.192269  496036 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:04:27.192275  496036 fix.go:54] fixHost starting: 
	I1013 22:04:27.192558  496036 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:04:27.214564  496036 fix.go:112] recreateIfNeeded on default-k8s-diff-port-505851: state=Stopped err=<nil>
	W1013 22:04:27.214600  496036 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:04:26.881029  494020 addons.go:514] duration metric: took 1.97979899s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:04:27.366173  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.371471  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:27.371500  494020 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:27.866391  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.871337  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:04:27.872820  494020 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:27.872853  494020 api_server.go:131] duration metric: took 1.007313107s to wait for apiserver health ...
	I1013 22:04:27.872865  494020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:27.877852  494020 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:27.877910  494020 system_pods.go:61] "coredns-66bc5c9577-br2pb" [531000de-cace-4ffd-ae65-51208d0783c5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.877930  494020 system_pods.go:61] "etcd-newest-cni-843554" [b1660b76-27be-45d2-89da-274c5320b389] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:27.877954  494020 system_pods.go:61] "kindnet-x9k2d" [dda7fe66-1403-4701-b8af-1b3502336d9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:04:27.877964  494020 system_pods.go:61] "kube-apiserver-newest-cni-843554" [0c7381fb-b918-4708-afa7-ad537bf1c3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:27.877973  494020 system_pods.go:61] "kube-controller-manager-newest-cni-843554" [b31669bc-26b0-45c5-aae7-2e7132dcfe60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:27.877982  494020 system_pods.go:61] "kube-proxy-zgkgm" [f6dbeddd-feee-4c6f-a51b-add1412128a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:04:27.878003  494020 system_pods.go:61] "kube-scheduler-newest-cni-843554" [dea0fcee-f00b-4190-a4fa-4bc097a9f7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:27.878016  494020 system_pods.go:61] "storage-provisioner" [2f536c11-96c7-4cb7-8128-eb53b3d44ce8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.878025  494020 system_pods.go:74] duration metric: took 5.151561ms to wait for pod list to return data ...
	I1013 22:04:27.878039  494020 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:27.883229  494020 default_sa.go:45] found service account: "default"
	I1013 22:04:27.883279  494020 default_sa.go:55] duration metric: took 5.230572ms for default service account to be created ...
	I1013 22:04:27.883296  494020 kubeadm.go:586] duration metric: took 2.982070176s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:04:27.883338  494020 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:27.886660  494020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:27.886696  494020 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:27.886715  494020 node_conditions.go:105] duration metric: took 3.364375ms to run NodePressure ...
	I1013 22:04:27.886731  494020 start.go:241] waiting for startup goroutines ...
	I1013 22:04:27.886741  494020 start.go:246] waiting for cluster config update ...
	I1013 22:04:27.886757  494020 start.go:255] writing updated cluster config ...
	I1013 22:04:27.887108  494020 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:27.947927  494020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.951086  494020 out.go:179] * Done! kubectl is now configured to use "newest-cni-843554" cluster and "default" namespace by default
	I1013 22:04:27.216426  496036 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-505851" ...
	I1013 22:04:27.216519  496036 cli_runner.go:164] Run: docker start default-k8s-diff-port-505851
	I1013 22:04:27.546236  496036 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:04:27.570902  496036 kic.go:430] container "default-k8s-diff-port-505851" state is running.
	I1013 22:04:27.571553  496036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:04:27.593620  496036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:04:27.593950  496036 machine.go:93] provisionDockerMachine start ...
	I1013 22:04:27.594087  496036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:04:27.619583  496036 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:27.619942  496036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1013 22:04:27.619963  496036 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:04:27.620721  496036 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49126->127.0.0.1:33098: read: connection reset by peer
	I1013 22:04:30.762693  496036 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
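	Sketch (reviewer note): the native SSH client above dials 127.0.0.1:33098, the host side of the container's 22/tcp mapping; the same Go template the log uses recovers it by hand:
	
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-505851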
	I1013 22:04:30.762729  496036 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-505851"
	I1013 22:04:30.762797  496036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:04:30.782887  496036 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:30.783227  496036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1013 22:04:30.783251  496036 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-505851 && echo "default-k8s-diff-port-505851" | sudo tee /etc/hostname
	I1013 22:04:30.947431  496036 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505851
	
	I1013 22:04:30.947526  496036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:04:30.973128  496036 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:30.973412  496036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1013 22:04:30.973433  496036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-505851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-505851/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-505851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:04:31.126563  496036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
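	Sketch (reviewer note): the /etc/hosts script above is idempotent: grep -xq matches whole lines, so the 127.0.1.1 entry is rewritten in place when present and appended at most once. A standalone equivalent, with NAME as a placeholder:
	
	    NAME=default-k8s-diff-port-505851
	    if ! grep -q "[[:space:]]${NAME}$" /etc/hosts; then
	        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	            # a 127.0.1.1 entry exists: point it at NAME
	            sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
	        else
	            # no entry yet: append one
	            echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	        fi
	    fi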
	I1013 22:04:31.126604  496036 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:04:31.126653  496036 ubuntu.go:190] setting up certificates
	I1013 22:04:31.126667  496036 provision.go:84] configureAuth start
	I1013 22:04:31.126736  496036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-505851
	I1013 22:04:31.148459  496036 provision.go:143] copyHostCerts
	I1013 22:04:31.148531  496036 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:04:31.148570  496036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:04:31.148663  496036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:04:31.148820  496036 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:04:31.148837  496036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:04:31.148884  496036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:04:31.148972  496036 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:04:31.148984  496036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:04:31.149055  496036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:04:31.149147  496036 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-505851 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-505851 localhost minikube]
	I1013 22:04:31.229565  496036 provision.go:177] copyRemoteCerts
	I1013 22:04:31.229629  496036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:04:31.229698  496036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:04:31.254631  496036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:04:31.359736  496036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:04:31.381168  496036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:04:31.402220  496036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:04:31.424275  496036 provision.go:87] duration metric: took 297.595888ms to configureAuth
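	Sketch (reviewer note): configureAuth generated /etc/docker/server.pem with the san=[...] list shown above; the SANs can be checked on the node, assuming openssl is available in the node image:
	
	    minikube ssh -p default-k8s-diff-port-505851 -- sudo openssl x509 -in /etc/docker/server.pem -noout -text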
	I1013 22:04:31.424302  496036 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:04:31.424507  496036 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:31.424626  496036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:04:31.447909  496036 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:31.448230  496036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1013 22:04:31.448257  496036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.490113949Z" level=info msg="Running pod sandbox: kube-system/kindnet-x9k2d/POD" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.490214887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.49344124Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.494377364Z" level=info msg="Ran pod sandbox 5069c1438b700ca32d4b8e9127b4885a9fe4e53213f2aca4d6ac4a6fe0935765 with infra container: kube-system/kube-proxy-zgkgm/POD" id=550c57b5-94f2-4c38-bb41-a6ecb98fb60c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.494716782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.496824486Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.497778758Z" level=info msg="Ran pod sandbox 7ce52899556293fa1e49cc1ec9069d3e66889cb8024620ffa3c65c843bdb15a0 with infra container: kube-system/kindnet-x9k2d/POD" id=e427b562-7e03-46b4-ae08-10a1ee2bdf66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.502183491Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=da1095bf-a8ad-4dfe-b1c5-0d8526ce144e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.502229207Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=45cf3cec-c6f1-4101-bd96-cc543a4b0474 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.50407325Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f6fed489-6867-457c-88f4-6c6844daadb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.504793629Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b69569cd-0b63-4601-a894-43019bdbda6a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.506209485Z" level=info msg="Creating container: kube-system/kindnet-x9k2d/kindnet-cni" id=a22957b2-dbfa-4667-a738-0344319ba7a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.506416762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.507256757Z" level=info msg="Creating container: kube-system/kube-proxy-zgkgm/kube-proxy" id=87915dbc-7230-42f2-bc81-4329d236d569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.509548927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.512562185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.513210562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.516856371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.517672063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.544564647Z" level=info msg="Created container 6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3: kube-system/kindnet-x9k2d/kindnet-cni" id=a22957b2-dbfa-4667-a738-0344319ba7a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.545314192Z" level=info msg="Starting container: 6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3" id=4b346e07-8cc2-45c3-a4f2-a4e5d311e248 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.548029017Z" level=info msg="Started container" PID=1052 containerID=6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3 description=kube-system/kindnet-x9k2d/kindnet-cni id=4b346e07-8cc2-45c3-a4f2-a4e5d311e248 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ce52899556293fa1e49cc1ec9069d3e66889cb8024620ffa3c65c843bdb15a0
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.551088674Z" level=info msg="Created container 06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044: kube-system/kube-proxy-zgkgm/kube-proxy" id=87915dbc-7230-42f2-bc81-4329d236d569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.551845209Z" level=info msg="Starting container: 06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044" id=8bf965a9-8866-41c3-a302-af8fce749b5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:27 newest-cni-843554 crio[526]: time="2025-10-13T22:04:27.555576102Z" level=info msg="Started container" PID=1053 containerID=06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044 description=kube-system/kube-proxy-zgkgm/kube-proxy id=8bf965a9-8866-41c3-a302-af8fce749b5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5069c1438b700ca32d4b8e9127b4885a9fe4e53213f2aca4d6ac4a6fe0935765
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	06775b1276b3a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   5069c1438b700       kube-proxy-zgkgm                            kube-system
	6f0c64a9efcd8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   7ce5289955629       kindnet-x9k2d                               kube-system
	327444d3eb25b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   749fcb4b90b08       kube-controller-manager-newest-cni-843554   kube-system
	0b277c65787c4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   8a4a07dd64e45       kube-apiserver-newest-cni-843554            kube-system
	1f033e42eeaf3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   020e5d3e93776       etcd-newest-cni-843554                      kube-system
	be70fd932072f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   4fd026e3ce246       kube-scheduler-newest-cni-843554            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-843554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-843554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-843554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-843554
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:04:26 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-843554
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                251c90b5-21d9-4e58-8666-2d86d8084a26
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-843554                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-x9k2d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-843554             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-843554    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-zgkgm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-843554             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 40s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     34s                kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           30s                node-controller  Node newest-cni-843554 event: Registered Node newest-cni-843554 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node newest-cni-843554 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x8 over 10s)  kubelet          Node newest-cni-843554 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-843554 event: Registered Node newest-cni-843554 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [1f033e42eeaf346d9e7d5c0daacca0dc86df0814805b2e603582086f4bf618cb] <==
	{"level":"warn","ts":"2025-10-13T22:04:25.630635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.641150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.648940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.656175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.662930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.669523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.676571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.691030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.695707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.702785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.710452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.716902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.723115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.729294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.736529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.743512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.749808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.756374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.763118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.770409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.777358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.790293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.797046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.803591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:25.861139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54260","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:04:34 up  1:47,  0 user,  load average: 3.92, 3.58, 5.77
	Linux newest-cni-843554 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f0c64a9efcd8758124efb0152cbec061434e054bea5bbeb08cc3dad76c5e6c3] <==
	I1013 22:04:27.771019       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:27.771292       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1013 22:04:27.771423       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:27.771444       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:27.771475       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:28.070804       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:28.071048       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:28.071069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:28.071256       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:28.471304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:28.471339       1 metrics.go:72] Registering metrics
	I1013 22:04:28.472725       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [0b277c65787c450076a87589a9056f8e503435026df7a87ef8fcfd1f5fd85717] <==
	I1013 22:04:26.344704       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:26.344711       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:26.344733       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:26.344799       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:04:26.344810       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:04:26.345002       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:04:26.345043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:04:26.345079       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:04:26.349775       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:26.349941       1 policy_source.go:240] refreshing policies
	I1013 22:04:26.352804       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 22:04:26.353896       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:04:26.378036       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:26.648745       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:26.691492       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:26.716379       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:26.724176       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:26.732987       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:26.771822       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.71.79"}
	I1013 22:04:26.784389       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.85.147"}
	I1013 22:04:27.261725       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:04:29.777889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:30.026564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:30.125912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [327444d3eb25bfb0fe644674fa48e8f1aa6d8b136234fba36590d6457f450dc4] <==
	I1013 22:04:29.684688       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:04:29.685861       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:04:29.686148       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:04:29.686284       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:04:29.686344       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-843554"
	I1013 22:04:29.686482       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:04:29.689235       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:04:29.690531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:04:29.690657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:29.691734       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:04:29.693330       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:04:29.694542       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:04:29.695148       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:04:29.696877       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:04:29.700061       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:04:29.701237       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:04:29.701257       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:04:29.701270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:04:29.703412       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:04:29.703909       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:04:29.705405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:04:29.705712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:04:29.707029       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:04:29.711768       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:04:29.725960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [06775b1276b3a84da4ec0e52de07dbe6eae776c5ea15d92e0dd6494cbf3f6044] <==
	I1013 22:04:27.602215       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:27.660250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:27.761114       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:27.761160       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1013 22:04:27.761258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:27.783613       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:27.783686       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:27.790823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:27.791248       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:27.792142       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:27.794090       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:27.794140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:27.794203       1 config.go:309] "Starting node config controller"
	I1013 22:04:27.794277       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:27.794288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:27.794406       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:27.794415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:27.794111       1 config.go:200] "Starting service config controller"
	I1013 22:04:27.794481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:27.894335       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:27.895561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:04:27.895594       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [be70fd932072f322d31cdee2e908984ed89c0b1dba75f984223cd5fb43d68c52] <==
	I1013 22:04:25.508377       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:04:26.268479       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:04:26.268516       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:04:26.268530       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:04:26.268537       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:04:26.316602       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:26.316638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:26.319638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:26.319678       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:26.320202       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:26.320297       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:26.420449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.220843     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.221045     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.221149     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-843554\" not found" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.380438     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386799     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386881     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.386910     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.387897     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.393741     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-843554\" already exists" pod="kube-system/kube-apiserver-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.393774     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.400836     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-843554\" already exists" pod="kube-system/kube-controller-manager-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.400871     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.407277     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-843554\" already exists" pod="kube-system/kube-scheduler-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: I1013 22:04:26.407314     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:26 newest-cni-843554 kubelet[674]: E1013 22:04:26.413406     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-843554\" already exists" pod="kube-system/etcd-newest-cni-843554"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.177271     674 apiserver.go:52] "Watching apiserver"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.278675     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292486     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-lib-modules\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292571     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-cni-cfg\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292780     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda7fe66-1403-4701-b8af-1b3502336d9d-xtables-lock\") pod \"kindnet-x9k2d\" (UID: \"dda7fe66-1403-4701-b8af-1b3502336d9d\") " pod="kube-system/kindnet-x9k2d"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292870     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-xtables-lock\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:27 newest-cni-843554 kubelet[674]: I1013 22:04:27.292899     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6dbeddd-feee-4c6f-a51b-add1412128a2-lib-modules\") pod \"kube-proxy-zgkgm\" (UID: \"f6dbeddd-feee-4c6f-a51b-add1412128a2\") " pod="kube-system/kube-proxy-zgkgm"
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:04:29 newest-cni-843554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843554 -n newest-cni-843554
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-843554 -n newest-cni-843554: exit status 2 (373.536476ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
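Sketch (reviewer note): --format above applies a Go template to minikube's status struct; dumping the whole struct avoids guessing field names:

	out/minikube-linux-amd64 status -p newest-cni-843554 --output json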
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-843554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn
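Sketch (reviewer note): the field selector above is a reusable post-mortem query; custom columns also show which phase each non-Running pod is stuck in:

	kubectl --context newest-cni-843554 get pods -A --field-selector=status.phase!=Running \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase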
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn: exit status 1 (94.020483ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-br2pb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wdvfg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bxxrn" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-843554 describe pod coredns-66bc5c9577-br2pb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wdvfg kubernetes-dashboard-855c9754f9-bxxrn: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.14257ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:04:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
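Sketch (reviewer note): MK_ADDON_ENABLE_PAUSED means the addon-enable path first checks whether the cluster is paused by listing runc containers over SSH ("check paused: list paused" in the stderr above); the failing probe can be replayed by hand:

	minikube ssh -p embed-certs-521669 -- sudo runc list -f json
	# fails on this node with: open /run/runc: no such file or directory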
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-521669 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-521669 describe deploy/metrics-server -n kube-system: exit status 1 (67.34044ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-521669 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-521669
helpers_test.go:243: (dbg) docker inspect embed-certs-521669:

-- stdout --
	[
	    {
	        "Id": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	        "Created": "2025-10-13T22:03:15.556123483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:03:15.591708976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hostname",
	        "HostsPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hosts",
	        "LogPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203-json.log",
	        "Name": "/embed-certs-521669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-521669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-521669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	                "LowerDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-521669",
	                "Source": "/var/lib/docker/volumes/embed-certs-521669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-521669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-521669",
	                "name.minikube.sigs.k8s.io": "embed-certs-521669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1713ecdfee11591e8415aab7a895eac697301871673ab6c2966bf9ba89c1328c",
	            "SandboxKey": "/var/run/docker/netns/1713ecdfee11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-521669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c2:73:38:23:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50800b9f1c9d1d3bc768e42eef173bae32c640bbf4383e5f2ce56c38ad7a7349",
	                    "EndpointID": "eaa1f845cf29ee72454d15d461f5f8883995c80e7c146e501c4ba6b170832495",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-521669",
	                        "1baa373eead7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
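In the HostConfig.PortBindings above every HostPort is empty, which tells Docker to assign ephemeral host ports at container start; the actual assignments appear under NetworkSettings.Ports (for example 8443/tcp -> 127.0.0.1:33076). A minimal sketch for pulling a single field with docker inspect's standard Go-template --format flag instead of dumping the full JSON (the container name and port key are taken from the output above):

	docker inspect embed-certs-521669 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# prints 33076 here: the host port mapped to the apiserver's 8443/tcp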
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25: (1.090785305s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-534822                                                                                                                                                                                                                     │ old-k8s-version-534822       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p kubernetes-upgrade-050146                                                                                                                                                                                                                  │ kubernetes-upgrade-050146    │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p disable-driver-mounts-659143                                                                                                                                                                                                               │ disable-driver-mounts-659143 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ image   │ no-preload-080337 image list --format=json                                                                                                                                                                                                    │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ pause   │ -p no-preload-080337 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │                     │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ delete  │ -p no-preload-080337                                                                                                                                                                                                                          │ no-preload-080337            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-expiration-894101                                                                                                                                                                                                                     │ cert-expiration-894101       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-505851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-843554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-505851 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ stop    │ -p newest-cni-843554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-843554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-505851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 pgrep -a kubelet                                                                                                                                                                                                               │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ image   │ newest-cni-843554 image list --format=json                                                                                                                                                                                                    │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ pause   │ -p newest-cni-843554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-843554            │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-521669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:26.956859  496036 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:26.957185  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957196  496036 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:26.957200  496036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:26.957426  496036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:26.958103  496036 out.go:368] Setting JSON to false
	I1013 22:04:26.959552  496036 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6415,"bootTime":1760386652,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:04:26.959670  496036 start.go:141] virtualization: kvm guest
	I1013 22:04:26.961750  496036 out.go:179] * [default-k8s-diff-port-505851] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:04:26.963349  496036 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:26.963411  496036 notify.go:220] Checking for updates...
	I1013 22:04:26.965760  496036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:26.967036  496036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:26.968402  496036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:04:26.969929  496036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:04:26.971386  496036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:26.973194  496036 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:26.973733  496036 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:26.999446  496036 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:04:26.999551  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.071876  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.059169849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.072057  496036 docker.go:318] overlay module found
	I1013 22:04:27.073848  496036 out.go:179] * Using the docker driver based on existing profile
	W1013 22:04:24.585424  487583 node_ready.go:57] node "auto-200102" has "Ready":"False" status (will retry)
	I1013 22:04:25.084904  487583 node_ready.go:49] node "auto-200102" is "Ready"
	I1013 22:04:25.084967  487583 node_ready.go:38] duration metric: took 11.503181949s for node "auto-200102" to be "Ready" ...
	I1013 22:04:25.085065  487583 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:25.085136  487583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:25.100833  487583 api_server.go:72] duration metric: took 11.780125697s to wait for apiserver process to appear ...
	I1013 22:04:25.100936  487583 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:25.100960  487583 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:04:25.108756  487583 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:04:25.110322  487583 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:25.110346  487583 api_server.go:131] duration metric: took 9.402756ms to wait for apiserver health ...
	I1013 22:04:25.110356  487583 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:25.115120  487583 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:25.115171  487583 system_pods.go:61] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.115184  487583 system_pods.go:61] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.115199  487583 system_pods.go:61] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.115205  487583 system_pods.go:61] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.115211  487583 system_pods.go:61] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.115222  487583 system_pods.go:61] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.115227  487583 system_pods.go:61] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.115237  487583 system_pods.go:61] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.115246  487583 system_pods.go:74] duration metric: took 4.883205ms to wait for pod list to return data ...
	I1013 22:04:25.115261  487583 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:25.118286  487583 default_sa.go:45] found service account: "default"
	I1013 22:04:25.118321  487583 default_sa.go:55] duration metric: took 3.046489ms for default service account to be created ...
	I1013 22:04:25.118333  487583 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:04:25.122038  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.122073  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:25.122080  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.122088  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.122093  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.122098  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.122104  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.122109  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.122117  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:04:25.122142  487583 retry.go:31] will retry after 273.856133ms: missing components: kube-dns
	I1013 22:04:25.402216  487583 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:25.402254  487583 system_pods.go:89] "coredns-66bc5c9577-sdbk9" [8d36ca24-a3ba-4fbe-a653-00be0927dc3d] Running
	I1013 22:04:25.402263  487583 system_pods.go:89] "etcd-auto-200102" [59039ed7-e15a-407e-841e-30a86d7b903f] Running
	I1013 22:04:25.402269  487583 system_pods.go:89] "kindnet-c9psd" [3b7c7e30-9d46-488f-bff2-977f91619b90] Running
	I1013 22:04:25.402281  487583 system_pods.go:89] "kube-apiserver-auto-200102" [1ebbd1d6-405c-44e8-a237-feb0705cc530] Running
	I1013 22:04:25.402335  487583 system_pods.go:89] "kube-controller-manager-auto-200102" [78c87a54-7514-4def-8203-3dbcd916a373] Running
	I1013 22:04:25.402359  487583 system_pods.go:89] "kube-proxy-m6qcc" [432fd165-7595-4ef5-b34b-2183518251e0] Running
	I1013 22:04:25.402366  487583 system_pods.go:89] "kube-scheduler-auto-200102" [d6496262-fff5-48e6-821f-32513cda17fc] Running
	I1013 22:04:25.402376  487583 system_pods.go:89] "storage-provisioner" [056b0c4f-6bc9-4461-b3da-18021518efe9] Running
	I1013 22:04:25.402385  487583 system_pods.go:126] duration metric: took 284.045366ms to wait for k8s-apps to be running ...
	I1013 22:04:25.402397  487583 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:04:25.402464  487583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:25.419657  487583 system_svc.go:56] duration metric: took 17.243747ms WaitForService to wait for kubelet
	I1013 22:04:25.419690  487583 kubeadm.go:586] duration metric: took 12.098989753s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:25.419712  487583 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:25.423101  487583 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:25.423126  487583 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:25.423142  487583 node_conditions.go:105] duration metric: took 3.423638ms to run NodePressure ...
	I1013 22:04:25.423153  487583 start.go:241] waiting for startup goroutines ...
	I1013 22:04:25.423161  487583 start.go:246] waiting for cluster config update ...
	I1013 22:04:25.423171  487583 start.go:255] writing updated cluster config ...
	I1013 22:04:25.423441  487583 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:25.427987  487583 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:25.432645  487583 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.437593  487583 pod_ready.go:94] pod "coredns-66bc5c9577-sdbk9" is "Ready"
	I1013 22:04:25.437619  487583 pod_ready.go:86] duration metric: took 4.938693ms for pod "coredns-66bc5c9577-sdbk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.439782  487583 pod_ready.go:83] waiting for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.443878  487583 pod_ready.go:94] pod "etcd-auto-200102" is "Ready"
	I1013 22:04:25.443902  487583 pod_ready.go:86] duration metric: took 4.093991ms for pod "etcd-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.446617  487583 pod_ready.go:83] waiting for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.451319  487583 pod_ready.go:94] pod "kube-apiserver-auto-200102" is "Ready"
	I1013 22:04:25.451347  487583 pod_ready.go:86] duration metric: took 4.705052ms for pod "kube-apiserver-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.453421  487583 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:25.832825  487583 pod_ready.go:94] pod "kube-controller-manager-auto-200102" is "Ready"
	I1013 22:04:25.832855  487583 pod_ready.go:86] duration metric: took 379.41459ms for pod "kube-controller-manager-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.033261  487583 pod_ready.go:83] waiting for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.432688  487583 pod_ready.go:94] pod "kube-proxy-m6qcc" is "Ready"
	I1013 22:04:26.432727  487583 pod_ready.go:86] duration metric: took 399.435877ms for pod "kube-proxy-m6qcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:26.634447  487583 pod_ready.go:83] waiting for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034881  487583 pod_ready.go:94] pod "kube-scheduler-auto-200102" is "Ready"
	I1013 22:04:27.034917  487583 pod_ready.go:86] duration metric: took 400.438557ms for pod "kube-scheduler-auto-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:04:27.034934  487583 pod_ready.go:40] duration metric: took 1.606887332s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:27.100050  487583 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.101928  487583 out.go:179] * Done! kubectl is now configured to use "auto-200102" cluster and "default" namespace by default
	I1013 22:04:27.075528  496036 start.go:305] selected driver: docker
	I1013 22:04:27.075556  496036 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.075659  496036 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:27.076485  496036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:27.157122  496036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-13 22:04:27.144207388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:27.157498  496036 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:27.157536  496036 cni.go:84] Creating CNI manager for ""
	I1013 22:04:27.157595  496036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:27.157655  496036 start.go:349] cluster config:
	{Name:default-k8s-diff-port-505851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-505851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:27.159479  496036 out.go:179] * Starting "default-k8s-diff-port-505851" primary control-plane node in "default-k8s-diff-port-505851" cluster
	I1013 22:04:27.161029  496036 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:27.162498  496036 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:04:27.164129  496036 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:27.164182  496036 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:04:27.164196  496036 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:27.164240  496036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:04:27.164327  496036 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:04:27.164343  496036 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:04:27.164477  496036 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/default-k8s-diff-port-505851/config.json ...
	I1013 22:04:27.192103  496036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:04:27.192129  496036 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:04:27.192151  496036 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:27.192180  496036 start.go:360] acquireMachinesLock for default-k8s-diff-port-505851: {Name:mkaf957bc5ced7f5c930a2e33ff0ee7c156af144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:27.192250  496036 start.go:364] duration metric: took 48.731µs to acquireMachinesLock for "default-k8s-diff-port-505851"
	I1013 22:04:27.192269  496036 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:04:27.192275  496036 fix.go:54] fixHost starting: 
	I1013 22:04:27.192558  496036 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:04:27.214564  496036 fix.go:112] recreateIfNeeded on default-k8s-diff-port-505851: state=Stopped err=<nil>
	W1013 22:04:27.214600  496036 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:04:26.881029  494020 addons.go:514] duration metric: took 1.97979899s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:04:27.366173  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.371471  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:27.371500  494020 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:27.866391  494020 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:04:27.871337  494020 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:04:27.872820  494020 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:27.872853  494020 api_server.go:131] duration metric: took 1.007313107s to wait for apiserver health ...
	I1013 22:04:27.872865  494020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:27.877852  494020 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:27.877910  494020 system_pods.go:61] "coredns-66bc5c9577-br2pb" [531000de-cace-4ffd-ae65-51208d0783c5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.877930  494020 system_pods.go:61] "etcd-newest-cni-843554" [b1660b76-27be-45d2-89da-274c5320b389] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:27.877954  494020 system_pods.go:61] "kindnet-x9k2d" [dda7fe66-1403-4701-b8af-1b3502336d9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:04:27.877964  494020 system_pods.go:61] "kube-apiserver-newest-cni-843554" [0c7381fb-b918-4708-afa7-ad537bf1c3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:27.877973  494020 system_pods.go:61] "kube-controller-manager-newest-cni-843554" [b31669bc-26b0-45c5-aae7-2e7132dcfe60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:27.877982  494020 system_pods.go:61] "kube-proxy-zgkgm" [f6dbeddd-feee-4c6f-a51b-add1412128a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:04:27.878003  494020 system_pods.go:61] "kube-scheduler-newest-cni-843554" [dea0fcee-f00b-4190-a4fa-4bc097a9f7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:27.878016  494020 system_pods.go:61] "storage-provisioner" [2f536c11-96c7-4cb7-8128-eb53b3d44ce8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:04:27.878025  494020 system_pods.go:74] duration metric: took 5.151561ms to wait for pod list to return data ...
	I1013 22:04:27.878039  494020 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:27.883229  494020 default_sa.go:45] found service account: "default"
	I1013 22:04:27.883279  494020 default_sa.go:55] duration metric: took 5.230572ms for default service account to be created ...
	I1013 22:04:27.883296  494020 kubeadm.go:586] duration metric: took 2.982070176s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:04:27.883338  494020 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:27.886660  494020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:27.886696  494020 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:27.886715  494020 node_conditions.go:105] duration metric: took 3.364375ms to run NodePressure ...
	I1013 22:04:27.886731  494020 start.go:241] waiting for startup goroutines ...
	I1013 22:04:27.886741  494020 start.go:246] waiting for cluster config update ...
	I1013 22:04:27.886757  494020 start.go:255] writing updated cluster config ...
	I1013 22:04:27.887108  494020 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:27.947927  494020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:04:27.951086  494020 out.go:179] * Done! kubectl is now configured to use "newest-cni-843554" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 22:04:19 embed-certs-521669 crio[773]: time="2025-10-13T22:04:19.750109933Z" level=info msg="Starting container: 88e40d34fe9ed590a0be669433a31a3fc126da23ed338015493e1adab0299fb6" id=75c79ee7-bbf6-4a21-8cf9-bdfe6dbc58f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:19 embed-certs-521669 crio[773]: time="2025-10-13T22:04:19.752286582Z" level=info msg="Started container" PID=1846 containerID=88e40d34fe9ed590a0be669433a31a3fc126da23ed338015493e1adab0299fb6 description=kube-system/coredns-66bc5c9577-kzq9t/coredns id=75c79ee7-bbf6-4a21-8cf9-bdfe6dbc58f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17e2773a238daf8f7e53de0bfbc24d4a09850b6d88a3a98d15c29e6b14b47a12
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.25988432Z" level=info msg="Running pod sandbox: default/busybox/POD" id=23a7774f-4d00-44a2-988b-1e286429ea3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.2600188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.266309826Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6c412d7c9e4e2aa8a1b0669b2458dfc3906b4356ffeace19771b7ec550d03dc8 UID:e6166149-7670-4cf2-b4fb-21490d127189 NetNS:/var/run/netns/0cf3f515-b6c4-48a1-b541-090ecf50441c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b278}] Aliases:map[]}"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.266373295Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.278313182Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6c412d7c9e4e2aa8a1b0669b2458dfc3906b4356ffeace19771b7ec550d03dc8 UID:e6166149-7670-4cf2-b4fb-21490d127189 NetNS:/var/run/netns/0cf3f515-b6c4-48a1-b541-090ecf50441c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b278}] Aliases:map[]}"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.27844301Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.279193041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.280064257Z" level=info msg="Ran pod sandbox 6c412d7c9e4e2aa8a1b0669b2458dfc3906b4356ffeace19771b7ec550d03dc8 with infra container: default/busybox/POD" id=23a7774f-4d00-44a2-988b-1e286429ea3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.281528228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7bbdad1c-4b8e-4803-a361-47c914d235d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.281689215Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7bbdad1c-4b8e-4803-a361-47c914d235d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.281728954Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7bbdad1c-4b8e-4803-a361-47c914d235d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.28258529Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2479bd41-4ceb-4f8a-9c8a-fe3b51d710e7 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:04:22 embed-certs-521669 crio[773]: time="2025-10-13T22:04:22.285097719Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.006005299Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2479bd41-4ceb-4f8a-9c8a-fe3b51d710e7 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.006798899Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=73291bc8-541c-415f-afe3-d79bd806e8af name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.008265568Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f5596d62-5c39-49b8-9ce8-a70289a7c942 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.011933569Z" level=info msg="Creating container: default/busybox/busybox" id=a8e74641-9b1a-44c6-8d6a-c7bed89cd9df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.012744941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.016139124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.01658036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.046460556Z" level=info msg="Created container 710044f9f7501e43c81247e9c92b54cb8f1f3d1f1c30ea87481dcc1f79025582: default/busybox/busybox" id=a8e74641-9b1a-44c6-8d6a-c7bed89cd9df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.047120607Z" level=info msg="Starting container: 710044f9f7501e43c81247e9c92b54cb8f1f3d1f1c30ea87481dcc1f79025582" id=a991fab3-30e6-4742-bc87-db2748e7d457 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:23 embed-certs-521669 crio[773]: time="2025-10-13T22:04:23.048923131Z" level=info msg="Started container" PID=1922 containerID=710044f9f7501e43c81247e9c92b54cb8f1f3d1f1c30ea87481dcc1f79025582 description=default/busybox/busybox id=a991fab3-30e6-4742-bc87-db2748e7d457 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c412d7c9e4e2aa8a1b0669b2458dfc3906b4356ffeace19771b7ec550d03dc8
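
The pull -> create -> start sequence above can be cross-checked against CRI-O from the host. A minimal sketch, assuming crictl is present in the node image; the container name and ID are taken from the log lines above:

	minikube -p embed-certs-521669 ssh -- sudo crictl ps --name busybox
	minikube -p embed-certs-521669 ssh -- sudo crictl logs 710044f9f7501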
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	710044f9f7501       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   6c412d7c9e4e2       busybox                                      default
	88e40d34fe9ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago       Running             coredns                   0                   17e2773a238da       coredns-66bc5c9577-kzq9t                     kube-system
	17659898a472d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago       Running             storage-provisioner       0                   e0f7439631c4a       storage-provisioner                          kube-system
	56eba5ba6692b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      51 seconds ago       Running             kube-proxy                0                   63ea95da52df1       kube-proxy-jjzrs                             kube-system
	88752e26dfd55       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   59d9846e63197       kindnet-rqr6b                                kube-system
	46b47a26a991a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   a25fa7c3e6e18       kube-scheduler-embed-certs-521669            kube-system
	bd943d1630504       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   ae343c8da47a2       kube-apiserver-embed-certs-521669            kube-system
	7cbef9457cf31       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   7c3e67f29bcc3       kube-controller-manager-embed-certs-521669   kube-system
	6975b32448bd6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   7f41659391275       etcd-embed-certs-521669                      kube-system
	
	
	==> coredns [88e40d34fe9ed590a0be669433a31a3fc126da23ed338015493e1adab0299fb6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53642 - 26108 "HINFO IN 4455606961966173220.7634270630314790083. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065410841s
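
The HINFO query above is CoreDNS's startup self-check against its own listener. One way to exercise the same resolver from inside the cluster, assuming the busybox image's nslookup applet is available:

	kubectl --context embed-certs-521669 exec busybox -- nslookup kubernetes.default.svc.cluster.local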
	
	
	==> describe nodes <==
	Name:               embed-certs-521669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-521669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-521669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-521669
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:04:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:04:19 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:04:19 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:04:19 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:04:19 +0000   Mon, 13 Oct 2025 22:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-521669
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                3d04c77e-97c4-4463-b7c6-6837fef5c3d8
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-kzq9t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     52s
	  kube-system                 etcd-embed-certs-521669                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         58s
	  kube-system                 kindnet-rqr6b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-embed-certs-521669             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-521669    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-jjzrs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-embed-certs-521669             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s                kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s                kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s                kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-521669 event: Registered Node embed-certs-521669 in Controller
	  Normal  NodeReady                11s                kubelet          Node embed-certs-521669 status is now: NodeReady
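
For reference, the 850m CPU request total is just the column sum from the pod table: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and the lone 100m CPU limit comes from kindnet. The dump itself corresponds to:

	kubectl --context embed-certs-521669 describe node embed-certs-521669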
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
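
The martian-source entries are the kernel logging packets whose source address (127.0.0.1) is implausible for the receiving interface; their timestamps (21:21-21:22) predate this cluster's creation at 22:03, so they appear to be residue from an earlier test on the shared host. Whether such logging is enabled can be checked on the node:

	minikube -p embed-certs-521669 ssh -- sysctl net.ipv4.conf.all.log_martians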
	
	
	==> etcd [6975b32448bd61badb1a57349e47da90d9608f4b2947a4af4f704bf1779c466e] <==
	{"level":"warn","ts":"2025-10-13T22:03:29.610242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.616951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.623680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.629689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.636221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.642940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.650626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.657123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.663762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.670457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.677070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.685125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.692203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.707839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.716230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.724295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.734759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.743170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.760233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.766929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.775166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:03:29.823620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:03:42.088158Z","caller":"traceutil/trace.go:172","msg":"trace[2038661600] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"141.569948ms","start":"2025-10-13T22:03:41.946559Z","end":"2025-10-13T22:03:42.088129Z","steps":["trace[2038661600] 'process raft request'  (duration: 126.218759ms)","trace[2038661600] 'compare'  (duration: 15.137125ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:03:44.082622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.677499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T22:03:44.082724Z","caller":"traceutil/trace.go:172","msg":"trace[839196056] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:425; }","duration":"132.814678ms","start":"2025-10-13T22:03:43.949889Z","end":"2025-10-13T22:03:44.082703Z","steps":["trace[839196056] 'range keys from in-memory index tree'  (duration: 132.603902ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:04:30 up  1:46,  0 user,  load average: 4.00, 3.59, 5.79
	Linux embed-certs-521669 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [88752e26dfd55e9fde046338af802780e50dd15b2c3738a63157a10d96c87f21] <==
	I1013 22:03:38.768770       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:03:38.769186       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:03:38.769332       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:03:38.769353       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:03:38.769380       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:03:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:03:38.973543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:03:38.973582       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:03:38.973594       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:03:38.973761       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:04:08.974573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:04:08.974665       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:04:08.974676       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:04:08.974779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 22:04:10.573938       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:10.573970       1 metrics.go:72] Registering metrics
	I1013 22:04:10.574072       1 controller.go:711] "Syncing nftables rules"
	I1013 22:04:18.974347       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:04:18.974417       1 main.go:301] handling current node
	I1013 22:04:28.977170       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:04:28.977294       1 main.go:301] handling current node
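
The four "Failed to watch ... i/o timeout" lines cover a single ~30s window (the default dial timeout) before the service network became reachable; the informer caches sync two seconds later at 22:04:10, so this looks like the usual first-boot race with kube-proxy rather than a persistent fault. Recent messages can be pulled with (pod label assumed from minikube's kindnet manifest):

	kubectl --context embed-certs-521669 -n kube-system logs -l app=kindnet --tail=20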
	
	
	==> kube-apiserver [bd943d16305045d1c6030b2d578a0ad8b128c552b75407364db5c1da5b045fb7] <==
	I1013 22:03:30.444335       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1013 22:03:30.448858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:03:30.451087       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:03:30.451225       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:30.455910       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:03:30.461742       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:03:30.461868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:30.611013       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:03:31.310834       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:03:31.314775       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:03:31.314795       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:03:31.799621       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:03:31.838123       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:03:31.915038       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:03:31.922849       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1013 22:03:31.924315       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:03:31.929050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:03:32.350867       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:03:32.911035       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:03:32.921648       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:03:32.932212       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:03:38.004028       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:38.012097       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:03:38.051708       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:03:38.466946       1 controller.go:667] quota admission added evaluator for: replicasets.apps
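
The ClusterIP allocator lines above correspond to the ServiceCIDR API, which is directly queryable; the default object should carry the 10.96.0.0/12 range from the log:

	kubectl --context embed-certs-521669 get servicecidrs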
	
	
	==> kube-controller-manager [7cbef9457cf311abf3a7d8e23793d87be2c574c10b79e1916ce5a32462a7b295] <==
	I1013 22:03:37.348888       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:03:37.348943       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:03:37.349091       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-521669"
	I1013 22:03:37.349152       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:03:37.349197       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:03:37.349216       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:03:37.350018       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:03:37.350034       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:03:37.351222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:03:37.352439       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:03:37.354949       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:03:37.355143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:03:37.356238       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 22:03:37.356263       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:03:37.356298       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 22:03:37.356347       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 22:03:37.356356       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 22:03:37.356363       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 22:03:37.358578       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:03:37.360883       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:03:37.363619       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-521669" podCIDRs=["10.244.0.0/24"]
	I1013 22:03:37.366729       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:03:37.369060       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:03:37.373366       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:04:22.356518       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [56eba5ba6692b11bdc9b828a251eec4e41b8f5c59b45a1d8973bee9b0f3babd6] <==
	I1013 22:03:39.115325       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:03:39.203411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:03:39.303574       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:03:39.303641       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1013 22:03:39.303758       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:03:39.330516       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:03:39.330573       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:03:39.339370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:03:39.339793       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:03:39.339825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:03:39.341631       1 config.go:200] "Starting service config controller"
	I1013 22:03:39.341903       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:03:39.341908       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:03:39.341930       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:03:39.341965       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:03:39.341971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:03:39.341978       1 config.go:309] "Starting node config controller"
	I1013 22:03:39.341986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:03:39.342037       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:03:39.442517       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:03:39.442535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:03:39.442870       1 shared_informer.go:356] "Caches are synced" controller="service config"
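
The "nodePortAddresses is unset" warning is advisory: it only means NodePorts bind on every local IP. The effective setting lives in the kubeadm-managed ConfigMap (name assumed from the kubeadm default):

	kubectl --context embed-certs-521669 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses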
	
	
	==> kube-scheduler [46b47a26a991a2bc24fc860a44d85b1ddea77602f7567b5f2b9732c82b4a16ce] <==
	I1013 22:03:30.839556       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:03:30.841429       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:03:30.841460       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:03:30.841780       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:03:30.841860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 22:03:30.843038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 22:03:30.843494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:03:30.844610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:03:30.844696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:03:30.844714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:03:30.844798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:03:30.844856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:03:30.844925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:03:30.844930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:03:30.845028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:03:30.845167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:03:30.845169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:03:30.845201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:03:30.845300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:03:30.845383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:03:30.845429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:03:30.845449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:03:30.845503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:03:30.845581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1013 22:03:32.042331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
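
The burst of "Failed to watch ... forbidden" errors at 22:03:30 is the familiar startup race: the scheduler comes up before its RBAC bindings are visible, and the errors stop once the cache sync at 22:03:32 succeeds. Whether the permissions are in place now can be spot-checked with:

	kubectl --context embed-certs-521669 auth can-i list pods --as=system:kube-scheduler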
	
	
	==> kubelet <==
	Oct 13 22:03:33 embed-certs-521669 kubelet[1316]: I1013 22:03:33.884706    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-521669" podStartSLOduration=1.884679779 podStartE2EDuration="1.884679779s" podCreationTimestamp="2025-10-13 22:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:33.884662837 +0000 UTC m=+1.197033273" watchObservedRunningTime="2025-10-13 22:03:33.884679779 +0000 UTC m=+1.197050215"
	Oct 13 22:03:33 embed-certs-521669 kubelet[1316]: I1013 22:03:33.884844    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-521669" podStartSLOduration=1.8848317890000001 podStartE2EDuration="1.884831789s" podCreationTimestamp="2025-10-13 22:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:33.871405023 +0000 UTC m=+1.183775459" watchObservedRunningTime="2025-10-13 22:03:33.884831789 +0000 UTC m=+1.197202225"
	Oct 13 22:03:37 embed-certs-521669 kubelet[1316]: I1013 22:03:37.427440    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 22:03:37 embed-certs-521669 kubelet[1316]: I1013 22:03:37.428389    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.096199    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/511ca726-6516-4c5b-8bb4-f76d6e83ef94-kube-proxy\") pod \"kube-proxy-jjzrs\" (UID: \"511ca726-6516-4c5b-8bb4-f76d6e83ef94\") " pod="kube-system/kube-proxy-jjzrs"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.096260    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gp9t\" (UniqueName: \"kubernetes.io/projected/511ca726-6516-4c5b-8bb4-f76d6e83ef94-kube-api-access-8gp9t\") pod \"kube-proxy-jjzrs\" (UID: \"511ca726-6516-4c5b-8bb4-f76d6e83ef94\") " pod="kube-system/kube-proxy-jjzrs"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.096306    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/511ca726-6516-4c5b-8bb4-f76d6e83ef94-xtables-lock\") pod \"kube-proxy-jjzrs\" (UID: \"511ca726-6516-4c5b-8bb4-f76d6e83ef94\") " pod="kube-system/kube-proxy-jjzrs"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.096335    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/511ca726-6516-4c5b-8bb4-f76d6e83ef94-lib-modules\") pod \"kube-proxy-jjzrs\" (UID: \"511ca726-6516-4c5b-8bb4-f76d6e83ef94\") " pod="kube-system/kube-proxy-jjzrs"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.196638    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thms\" (UniqueName: \"kubernetes.io/projected/83ca9459-7636-4391-814b-274ff7e06bc7-kube-api-access-4thms\") pod \"kindnet-rqr6b\" (UID: \"83ca9459-7636-4391-814b-274ff7e06bc7\") " pod="kube-system/kindnet-rqr6b"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.196740    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/83ca9459-7636-4391-814b-274ff7e06bc7-cni-cfg\") pod \"kindnet-rqr6b\" (UID: \"83ca9459-7636-4391-814b-274ff7e06bc7\") " pod="kube-system/kindnet-rqr6b"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.196765    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83ca9459-7636-4391-814b-274ff7e06bc7-lib-modules\") pod \"kindnet-rqr6b\" (UID: \"83ca9459-7636-4391-814b-274ff7e06bc7\") " pod="kube-system/kindnet-rqr6b"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.196825    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83ca9459-7636-4391-814b-274ff7e06bc7-xtables-lock\") pod \"kindnet-rqr6b\" (UID: \"83ca9459-7636-4391-814b-274ff7e06bc7\") " pod="kube-system/kindnet-rqr6b"
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: E1013 22:03:38.207326    1316 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: E1013 22:03:38.207375    1316 projected.go:196] Error preparing data for projected volume kube-api-access-8gp9t for pod kube-system/kube-proxy-jjzrs: configmap "kube-root-ca.crt" not found
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: E1013 22:03:38.207546    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/511ca726-6516-4c5b-8bb4-f76d6e83ef94-kube-api-access-8gp9t podName:511ca726-6516-4c5b-8bb4-f76d6e83ef94 nodeName:}" failed. No retries permitted until 2025-10-13 22:03:38.70751265 +0000 UTC m=+6.019883086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8gp9t" (UniqueName: "kubernetes.io/projected/511ca726-6516-4c5b-8bb4-f76d6e83ef94-kube-api-access-8gp9t") pod "kube-proxy-jjzrs" (UID: "511ca726-6516-4c5b-8bb4-f76d6e83ef94") : configmap "kube-root-ca.crt" not found
	Oct 13 22:03:38 embed-certs-521669 kubelet[1316]: I1013 22:03:38.848817    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rqr6b" podStartSLOduration=0.848792096 podStartE2EDuration="848.792096ms" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:38.848467176 +0000 UTC m=+6.160837610" watchObservedRunningTime="2025-10-13 22:03:38.848792096 +0000 UTC m=+6.161162591"
	Oct 13 22:03:39 embed-certs-521669 kubelet[1316]: I1013 22:03:39.837958    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jjzrs" podStartSLOduration=1.837926749 podStartE2EDuration="1.837926749s" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:03:39.837574275 +0000 UTC m=+7.149944694" watchObservedRunningTime="2025-10-13 22:03:39.837926749 +0000 UTC m=+7.150297188"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.362890    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.488543    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzb26\" (UniqueName: \"kubernetes.io/projected/de4a6bd9-ffde-4056-a47b-41dd5db09e0f-kube-api-access-tzb26\") pod \"coredns-66bc5c9577-kzq9t\" (UID: \"de4a6bd9-ffde-4056-a47b-41dd5db09e0f\") " pod="kube-system/coredns-66bc5c9577-kzq9t"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.488606    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de4a6bd9-ffde-4056-a47b-41dd5db09e0f-config-volume\") pod \"coredns-66bc5c9577-kzq9t\" (UID: \"de4a6bd9-ffde-4056-a47b-41dd5db09e0f\") " pod="kube-system/coredns-66bc5c9577-kzq9t"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.488640    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9c70ca0c-a52a-43a0-8221-0c1ecd43c72a-tmp\") pod \"storage-provisioner\" (UID: \"9c70ca0c-a52a-43a0-8221-0c1ecd43c72a\") " pod="kube-system/storage-provisioner"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.488660    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrmt\" (UniqueName: \"kubernetes.io/projected/9c70ca0c-a52a-43a0-8221-0c1ecd43c72a-kube-api-access-sdrmt\") pod \"storage-provisioner\" (UID: \"9c70ca0c-a52a-43a0-8221-0c1ecd43c72a\") " pod="kube-system/storage-provisioner"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.931653    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.931628128 podStartE2EDuration="40.931628128s" podCreationTimestamp="2025-10-13 22:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:19.9313547 +0000 UTC m=+47.243725136" watchObservedRunningTime="2025-10-13 22:04:19.931628128 +0000 UTC m=+47.243998564"
	Oct 13 22:04:19 embed-certs-521669 kubelet[1316]: I1013 22:04:19.943408    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kzq9t" podStartSLOduration=41.943386195 podStartE2EDuration="41.943386195s" podCreationTimestamp="2025-10-13 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:04:19.942469251 +0000 UTC m=+47.254839686" watchObservedRunningTime="2025-10-13 22:04:19.943386195 +0000 UTC m=+47.255756630"
	Oct 13 22:04:22 embed-certs-521669 kubelet[1316]: I1013 22:04:22.005898    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzbb\" (UniqueName: \"kubernetes.io/projected/e6166149-7670-4cf2-b4fb-21490d127189-kube-api-access-7rzbb\") pod \"busybox\" (UID: \"e6166149-7670-4cf2-b4fb-21490d127189\") " pod="default/busybox"
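
The 'configmap "kube-root-ca.crt" not found' errors at 22:03:38 are the same kind of startup race: the kubelet tried to project the service-account token volume before the root CA ConfigMap was published, and the 500ms retry succeeded (kube-proxy is running by 22:03:39). The ConfigMap itself can be confirmed with:

	kubectl --context embed-certs-521669 -n kube-system get configmap kube-root-ca.crt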
	
	
	==> storage-provisioner [17659898a472dc3073d1b404760e07806c327a51b22911f47fbe7baa7fec3816] <==
	I1013 22:04:19.759489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:04:19.767612       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:04:19.767674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:04:19.769926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:19.776965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:04:19.777217       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:04:19.777350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2713828c-d71d-46c7-8af8-1b55a2cb8cd7", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-521669_132f35db-a6f5-4645-8a54-f9731ee1b0c4 became leader
	I1013 22:04:19.777390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_132f35db-a6f5-4645-8a54-f9731ee1b0c4!
	W1013 22:04:19.779699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:19.783431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:04:19.877915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_132f35db-a6f5-4645-8a54-f9731ee1b0c4!
	W1013 22:04:21.786952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:21.793176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:23.797164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:23.801369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:25.805028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:25.809833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:27.813625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:27.818195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:29.821414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:04:29.828147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
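
The repeated Endpoints deprecation warnings come from the provisioner's leader election, which still renews its lock through the v1 Endpoints object named in the log; each renewal triggers the server-side warning. The lock object can be inspected with:

	kubectl --context embed-certs-521669 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml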
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-521669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-505851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-505851 --alsologtostderr -v=1: exit status 80 (1.836789749s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-505851 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:05:29.201519  514540 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:05:29.201676  514540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:29.201690  514540 out.go:374] Setting ErrFile to fd 2...
	I1013 22:05:29.201697  514540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:29.201976  514540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:05:29.202329  514540 out.go:368] Setting JSON to false
	I1013 22:05:29.202376  514540 mustload.go:65] Loading cluster: default-k8s-diff-port-505851
	I1013 22:05:29.202733  514540 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:29.203195  514540 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-505851 --format={{.State.Status}}
	I1013 22:05:29.221486  514540 host.go:66] Checking if "default-k8s-diff-port-505851" exists ...
	I1013 22:05:29.221772  514540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:29.286807  514540 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-13 22:05:29.275799916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:05:29.287525  514540 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-505851 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:05:29.289548  514540 out.go:179] * Pausing node default-k8s-diff-port-505851 ... 
	I1013 22:05:29.290746  514540 host.go:66] Checking if "default-k8s-diff-port-505851" exists ...
	I1013 22:05:29.291110  514540 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:29.291155  514540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-505851
	I1013 22:05:29.311491  514540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/default-k8s-diff-port-505851/id_rsa Username:docker}
	I1013 22:05:29.419412  514540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:29.445385  514540 pause.go:52] kubelet running: true
	I1013 22:05:29.445459  514540 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:29.674137  514540 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:29.674296  514540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:29.765582  514540 cri.go:89] found id: "62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220"
	I1013 22:05:29.765612  514540 cri.go:89] found id: "47a2f7003ce93cda6369bdcfca70a589ca8b8c7e50b0ec90f8b055885ba36ed6"
	I1013 22:05:29.765618  514540 cri.go:89] found id: "73688ac34163745dfcaf8e03c5c6a54a4c91a87cb7741b6e20dcbece59db29e5"
	I1013 22:05:29.765622  514540 cri.go:89] found id: "56588477c61cdaf31579516f71a44486912511726118d920501dc6964a03af29"
	I1013 22:05:29.765635  514540 cri.go:89] found id: "648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59"
	I1013 22:05:29.765640  514540 cri.go:89] found id: "adda782c2ba2a3f6139979f78f26db41eb8daa3211f0cadcb2a7c82193618fea"
	I1013 22:05:29.765644  514540 cri.go:89] found id: "90e2257cdef169aad8152d89754d028b3f47ff10734cdbe1fc2a91ee1d85145e"
	I1013 22:05:29.765647  514540 cri.go:89] found id: "a4123f94280435b49d4a87e687509166fcba7b0fb561e6b74a0f94b565fb9fc7"
	I1013 22:05:29.765651  514540 cri.go:89] found id: "4e42bb1ca9412735b924cae876a0503b479855539f2a50a515e9f235dd2a15ee"
	I1013 22:05:29.765660  514540 cri.go:89] found id: "ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	I1013 22:05:29.765665  514540 cri.go:89] found id: "e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470"
	I1013 22:05:29.765669  514540 cri.go:89] found id: ""
	I1013 22:05:29.765749  514540 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:29.780890  514540 retry.go:31] will retry after 221.295804ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:29Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:05:30.003211  514540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:30.018118  514540 pause.go:52] kubelet running: false
	I1013 22:05:30.018188  514540 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:30.202201  514540 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:30.202326  514540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:30.288566  514540 cri.go:89] found id: "62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220"
	I1013 22:05:30.288595  514540 cri.go:89] found id: "47a2f7003ce93cda6369bdcfca70a589ca8b8c7e50b0ec90f8b055885ba36ed6"
	I1013 22:05:30.288606  514540 cri.go:89] found id: "73688ac34163745dfcaf8e03c5c6a54a4c91a87cb7741b6e20dcbece59db29e5"
	I1013 22:05:30.288614  514540 cri.go:89] found id: "56588477c61cdaf31579516f71a44486912511726118d920501dc6964a03af29"
	I1013 22:05:30.288618  514540 cri.go:89] found id: "648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59"
	I1013 22:05:30.288623  514540 cri.go:89] found id: "adda782c2ba2a3f6139979f78f26db41eb8daa3211f0cadcb2a7c82193618fea"
	I1013 22:05:30.288628  514540 cri.go:89] found id: "90e2257cdef169aad8152d89754d028b3f47ff10734cdbe1fc2a91ee1d85145e"
	I1013 22:05:30.288631  514540 cri.go:89] found id: "a4123f94280435b49d4a87e687509166fcba7b0fb561e6b74a0f94b565fb9fc7"
	I1013 22:05:30.288636  514540 cri.go:89] found id: "4e42bb1ca9412735b924cae876a0503b479855539f2a50a515e9f235dd2a15ee"
	I1013 22:05:30.288645  514540 cri.go:89] found id: "ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	I1013 22:05:30.288650  514540 cri.go:89] found id: "e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470"
	I1013 22:05:30.288654  514540 cri.go:89] found id: ""
	I1013 22:05:30.288712  514540 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:30.304504  514540 retry.go:31] will retry after 332.869971ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:30Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:05:30.638126  514540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:30.655479  514540 pause.go:52] kubelet running: false
	I1013 22:05:30.655553  514540 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:30.868648  514540 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:30.868748  514540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:30.958124  514540 cri.go:89] found id: "62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220"
	I1013 22:05:30.958150  514540 cri.go:89] found id: "47a2f7003ce93cda6369bdcfca70a589ca8b8c7e50b0ec90f8b055885ba36ed6"
	I1013 22:05:30.958156  514540 cri.go:89] found id: "73688ac34163745dfcaf8e03c5c6a54a4c91a87cb7741b6e20dcbece59db29e5"
	I1013 22:05:30.958161  514540 cri.go:89] found id: "56588477c61cdaf31579516f71a44486912511726118d920501dc6964a03af29"
	I1013 22:05:30.958165  514540 cri.go:89] found id: "648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59"
	I1013 22:05:30.958168  514540 cri.go:89] found id: "adda782c2ba2a3f6139979f78f26db41eb8daa3211f0cadcb2a7c82193618fea"
	I1013 22:05:30.958171  514540 cri.go:89] found id: "90e2257cdef169aad8152d89754d028b3f47ff10734cdbe1fc2a91ee1d85145e"
	I1013 22:05:30.958173  514540 cri.go:89] found id: "a4123f94280435b49d4a87e687509166fcba7b0fb561e6b74a0f94b565fb9fc7"
	I1013 22:05:30.958175  514540 cri.go:89] found id: "4e42bb1ca9412735b924cae876a0503b479855539f2a50a515e9f235dd2a15ee"
	I1013 22:05:30.958186  514540 cri.go:89] found id: "ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	I1013 22:05:30.958189  514540 cri.go:89] found id: "e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470"
	I1013 22:05:30.958191  514540 cri.go:89] found id: ""
	I1013 22:05:30.958240  514540 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:30.976551  514540 out.go:203] 
	W1013 22:05:30.977937  514540 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:05:30.977964  514540 out.go:285] * 
	* 
	W1013 22:05:30.984571  514540 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:05:30.985910  514540 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-505851 --alsologtostderr -v=1 failed: exit status 80
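For context on the exit status 80 above: the pause path disables the kubelet, enumerates CRI containers (the `crictl ps` eval succeeds and returns eleven container IDs on each attempt), then lists runtime state with `sudo runc list -f json`, and that last step fails three times in a row with `open /run/runc: no such file or directory` before minikube gives up. The failing step can be reproduced in isolation; a sketch, assuming the profile from this run is still up:

	# The exact listing minikube's pause path runs inside the node container:
	out/minikube-linux-amd64 -p default-k8s-diff-port-505851 ssh -- sudo runc list -f json
	# The CRI-level view, which did succeed in the log above:
	out/minikube-linux-amd64 -p default-k8s-diff-port-505851 ssh -- sudo crictl ps --quiet

One plausible reading, not settled by this log alone, is that CRI-O keeps its runc state under a root other than runc's default /run/runc; note also that /run is a tmpfs in the node container (see HostConfig.Tmpfs in the inspect dump below) and the container was restarted shortly before the test (FinishedAt 22:04:26, StartedAt 22:04:27), so nothing under /run survives that restart.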
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-505851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-505851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	        "Created": "2025-10-13T22:03:21.32648793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:27.264926771Z",
	            "FinishedAt": "2025-10-13T22:04:26.306907027Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hosts",
	        "LogPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea-json.log",
	        "Name": "/default-k8s-diff-port-505851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-505851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-505851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	                "LowerDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-505851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-505851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-505851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b392a435e3d34a1ae8ae6d1c0a26da3f0ee9cd91541afcb3c83dd2102371e080",
	            "SandboxKey": "/var/run/docker/netns/b392a435e3d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-505851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:2a:d0:7c:24:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd127c16ad9414037a41fda45a58cf82e4113c81cfa569a1b9f2b3db8c366a7a",
	                    "EndpointID": "ec7bdda28dc0a1b46e50c076a2795a88c9bec2e6757da79bb91b6a482bbfebf0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-505851",
	                        "25632f4a587b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851: exit status 2 (363.793221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
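Both probes in this post-mortem are plain Go templates, so they can be re-run by hand; a sketch copying the commands verbatim from this log:

	# Template the pause path used at 22:05:29 to find the node's SSH port
	# (matches the "22/tcp" HostPort of 33098 in the inspect dump above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-505851
	# The status probe that printed "Running" yet exited 2, which the helper
	# treats as possibly fine ("may be ok"):
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851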
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25: (1.479225327s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-200102 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo cat /var/lib/kubelet/config.yaml                                                                                                   │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status docker --all --full --no-pager                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat docker --no-pager                                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/docker/daemon.json                                                                                                        │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo docker system info                                                                                                                 │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl status cri-docker --all --full --no-pager                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat cri-docker --no-pager                                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                           │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                     │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cri-dockerd --version                                                                                                              │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status containerd --all --full --no-pager                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat containerd --no-pager                                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /lib/systemd/system/containerd.service                                                                                         │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/containerd/config.toml                                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo containerd config dump                                                                                                             │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status crio --all --full --no-pager                                                                                      │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl cat crio --no-pager                                                                                                      │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                            │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo crio config                                                                                                                        │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p auto-200102                                                                                                                                         │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p calico-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                 │ calico-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ image   │ default-k8s-diff-port-505851 image list --format=json                                                                                                  │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p default-k8s-diff-port-505851 --alsologtostderr -v=1                                                                                                 │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:57.600029  510068 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:57.600363  510068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:57.600376  510068 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:57.600383  510068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:57.600686  510068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:57.601373  510068 out.go:368] Setting JSON to false
	I1013 22:04:57.603077  510068 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6446,"bootTime":1760386652,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:04:57.603224  510068 start.go:141] virtualization: kvm guest
	I1013 22:04:57.605562  510068 out.go:179] * [calico-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:04:57.606924  510068 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:57.606962  510068 notify.go:220] Checking for updates...
	I1013 22:04:57.609488  510068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:57.611204  510068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:57.612626  510068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:04:57.614039  510068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:04:57.615575  510068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:57.617486  510068 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617584  510068 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617657  510068 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617767  510068 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:57.661729  510068 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:04:57.661828  510068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:57.735632  510068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:04:57.720460503 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:57.735770  510068 docker.go:318] overlay module found
	I1013 22:04:57.737869  510068 out.go:179] * Using the docker driver based on user configuration
	I1013 22:04:57.740052  510068 start.go:305] selected driver: docker
	I1013 22:04:57.740074  510068 start.go:925] validating driver "docker" against <nil>
	I1013 22:04:57.740090  510068 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:57.740774  510068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:57.851378  510068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:04:57.831976045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:57.851597  510068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:04:57.851888  510068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:57.853765  510068 out.go:179] * Using Docker driver with root privileges
	I1013 22:04:57.855220  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:04:57.855243  510068 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1013 22:04:57.855333  510068 start.go:349] cluster config:
	{Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:57.857287  510068 out.go:179] * Starting "calico-200102" primary control-plane node in "calico-200102" cluster
	I1013 22:04:57.858749  510068 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:57.860390  510068 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:04:57.574938  501664 out.go:252]   - Booting up control plane ...
	I1013 22:04:57.575078  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:04:57.575169  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:04:57.575758  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:04:57.593841  501664 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:04:57.593966  501664 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:04:57.601352  501664 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:04:57.601610  501664 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:04:57.601769  501664 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:04:57.727000  501664 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:04:57.727199  501664 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:04:57.861788  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:57.861835  510068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:04:57.861853  510068 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:57.861910  510068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:04:57.861985  510068 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:04:57.862009  510068 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:04:57.862127  510068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json ...
	I1013 22:04:57.862151  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json: {Name:mkdc0a5acfca7b93aaa4869933063bc1ca23a4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:57.886320  510068 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:04:57.886366  510068 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:04:57.886389  510068 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:57.886417  510068 start.go:360] acquireMachinesLock for calico-200102: {Name:mk9e164b65ac945058cc8fdff0a6b7c974929130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:57.886526  510068 start.go:364] duration metric: took 89.282µs to acquireMachinesLock for "calico-200102"
	I1013 22:04:57.886562  510068 start.go:93] Provisioning new machine with config: &{Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:04:57.886665  510068 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:04:56.138079  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:04:56.138105  505109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:04:56.138174  505109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:04:56.173583  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.175158  505109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:56.175245  505109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:04:56.175343  505109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:04:56.177076  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.205368  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.269491  505109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:04:56.285560  505109 node_ready.go:35] waiting up to 6m0s for node "embed-certs-521669" to be "Ready" ...
	I1013 22:04:56.298084  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:56.299573  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:04:56.299594  505109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:04:56.325199  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:04:56.325227  505109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:04:56.332701  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:56.350946  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:04:56.350987  505109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:04:56.374977  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:04:56.375047  505109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:04:56.395083  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:04:56.395113  505109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:04:56.413947  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:04:56.413978  505109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:04:56.434377  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:04:56.434404  505109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:04:56.461625  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:04:56.461670  505109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:04:56.485455  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:04:56.485486  505109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:04:56.505644  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
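
A note on the pattern here: each dashboard manifest is staged under /etc/kubernetes/addons over scp, and then a single kubectl invocation applies them all, one -f flag per file. A minimal Go sketch of assembling that command line (file list taken from the log above; this prints the command rather than executing it, and is not minikube's actual code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Manifests staged by the addon installer, per the log above.
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := make([]string, 0, 2*len(files))
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	fmt.Println("kubectl apply " + strings.Join(args, " "))
}
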
	I1013 22:04:57.933854  505109 node_ready.go:49] node "embed-certs-521669" is "Ready"
	I1013 22:04:57.933895  505109 node_ready.go:38] duration metric: took 1.64829365s for node "embed-certs-521669" to be "Ready" ...
	I1013 22:04:57.933912  505109 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:57.933963  505109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:58.556759  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.258633174s)
	I1013 22:04:58.556772  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.2239942s)
	I1013 22:04:58.557101  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.051390315s)
	I1013 22:04:58.557115  505109 api_server.go:72] duration metric: took 2.4551482s to wait for apiserver process to appear ...
	I1013 22:04:58.557132  505109 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:58.557152  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:58.561933  505109 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-521669 addons enable metrics-server
	
	I1013 22:04:58.565714  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:58.565742  505109 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
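
These 500s are expected: the [-] entries show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still running, and minikube simply re-polls /healthz until it returns 200 (which it does at 22:04:59.563 below). A minimal sketch of such a poll loop, assuming the endpoint from the log and skipping certificate verification for brevity (the real client verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.103.2:8443/healthz" // endpoint from the log above
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// A 500 with "[-]poststarthook/... failed" right after start is transient.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
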
	I1013 22:04:58.573259  505109 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1013 22:04:57.965374  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:00.457661  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:04:57.893491  510068 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:04:57.893863  510068 start.go:159] libmachine.API.Create for "calico-200102" (driver="docker")
	I1013 22:04:57.893898  510068 client.go:168] LocalClient.Create starting
	I1013 22:04:57.894004  510068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:04:57.894047  510068 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:57.894069  510068 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:57.894139  510068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:04:57.894163  510068 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:57.894174  510068 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:57.894618  510068 cli_runner.go:164] Run: docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:04:57.932742  510068 cli_runner.go:211] docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:04:57.932943  510068 network_create.go:284] running [docker network inspect calico-200102] to gather additional debugging logs...
	I1013 22:04:57.933011  510068 cli_runner.go:164] Run: docker network inspect calico-200102
	W1013 22:04:57.966457  510068 cli_runner.go:211] docker network inspect calico-200102 returned with exit code 1
	I1013 22:04:57.966488  510068 network_create.go:287] error running [docker network inspect calico-200102]: docker network inspect calico-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-200102 not found
	I1013 22:04:57.966612  510068 network_create.go:289] output of [docker network inspect calico-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-200102 not found
	
	** /stderr **
	I1013 22:04:57.966792  510068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:04:57.993632  510068 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:04:57.994725  510068 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:04:57.995703  510068 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:04:57.996510  510068 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd127c16ad94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:91:d2:e9:26:c1} reservation:<nil>}
	I1013 22:04:57.997762  510068 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86150}
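
The subnet scan above walks candidate private /24s in steps of 9 (192.168.49.0, .58, .67, .76, ...) and takes the first one no existing docker bridge occupies. A compact sketch of that walk; isTaken is a hypothetical stand-in for the bridge-interface inspection minikube performs:

package main

import "fmt"

// isTaken is a hypothetical stand-in for checking existing docker bridges.
func isTaken(subnet string) bool {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	return taken[subnet]
}

func main() {
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if isTaken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 here
		return
	}
}
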
	I1013 22:04:57.997836  510068 network_create.go:124] attempt to create docker network calico-200102 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:04:57.997934  510068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-200102 calico-200102
	I1013 22:04:58.086657  510068 network_create.go:108] docker network calico-200102 192.168.85.0/24 created
	I1013 22:04:58.086806  510068 kic.go:121] calculated static IP "192.168.85.2" for the "calico-200102" container
	I1013 22:04:58.086964  510068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:04:58.111796  510068 cli_runner.go:164] Run: docker volume create calico-200102 --label name.minikube.sigs.k8s.io=calico-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:04:58.138960  510068 oci.go:103] Successfully created a docker volume calico-200102
	I1013 22:04:58.139068  510068 cli_runner.go:164] Run: docker run --rm --name calico-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-200102 --entrypoint /usr/bin/test -v calico-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:04:58.653539  510068 oci.go:107] Successfully prepared a docker volume calico-200102
	I1013 22:04:58.653589  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:58.653613  510068 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:04:58.653677  510068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:04:58.229500  501664 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.536221ms
	I1013 22:04:58.235372  501664 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:04:58.235498  501664 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1013 22:04:58.235614  501664 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:04:58.235712  501664 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:04:59.754874  501664 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.519620412s
	I1013 22:05:00.927012  501664 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.691843323s
	I1013 22:04:58.574864  505109 addons.go:514] duration metric: took 2.47282637s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:04:59.058167  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:59.064872  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:59.064902  505109 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:59.558137  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:59.563643  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1013 22:04:59.564836  505109 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:59.564861  505109 api_server.go:131] duration metric: took 1.00772189s to wait for apiserver health ...
	I1013 22:04:59.564872  505109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:59.568145  505109 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:59.568188  505109 system_pods.go:61] "coredns-66bc5c9577-kzq9t" [de4a6bd9-ffde-4056-a47b-41dd5db09e0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:59.568199  505109 system_pods.go:61] "etcd-embed-certs-521669" [cef194ff-ec06-48fc-8b99-a25838ea9dd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:59.568206  505109 system_pods.go:61] "kindnet-rqr6b" [83ca9459-7636-4391-814b-274ff7e06bc7] Running
	I1013 22:04:59.568215  505109 system_pods.go:61] "kube-apiserver-embed-certs-521669" [80c8fec4-c979-4c91-a725-ee41f5f0aab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:59.568229  505109 system_pods.go:61] "kube-controller-manager-embed-certs-521669" [326549fc-7a4b-4837-959d-eaa1c069b89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:59.568234  505109 system_pods.go:61] "kube-proxy-jjzrs" [511ca726-6516-4c5b-8bb4-f76d6e83ef94] Running
	I1013 22:04:59.568243  505109 system_pods.go:61] "kube-scheduler-embed-certs-521669" [d91d80ae-c8fe-4eaf-b383-05b7202992d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:59.568253  505109 system_pods.go:61] "storage-provisioner" [9c70ca0c-a52a-43a0-8221-0c1ecd43c72a] Running
	I1013 22:04:59.568261  505109 system_pods.go:74] duration metric: took 3.38231ms to wait for pod list to return data ...
	I1013 22:04:59.568270  505109 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:59.572390  505109 default_sa.go:45] found service account: "default"
	I1013 22:04:59.572426  505109 default_sa.go:55] duration metric: took 4.148403ms for default service account to be created ...
	I1013 22:04:59.572439  505109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:04:59.575494  505109 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:59.575530  505109 system_pods.go:89] "coredns-66bc5c9577-kzq9t" [de4a6bd9-ffde-4056-a47b-41dd5db09e0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:59.575540  505109 system_pods.go:89] "etcd-embed-certs-521669" [cef194ff-ec06-48fc-8b99-a25838ea9dd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:59.575547  505109 system_pods.go:89] "kindnet-rqr6b" [83ca9459-7636-4391-814b-274ff7e06bc7] Running
	I1013 22:04:59.575556  505109 system_pods.go:89] "kube-apiserver-embed-certs-521669" [80c8fec4-c979-4c91-a725-ee41f5f0aab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:59.575564  505109 system_pods.go:89] "kube-controller-manager-embed-certs-521669" [326549fc-7a4b-4837-959d-eaa1c069b89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:59.575573  505109 system_pods.go:89] "kube-proxy-jjzrs" [511ca726-6516-4c5b-8bb4-f76d6e83ef94] Running
	I1013 22:04:59.575581  505109 system_pods.go:89] "kube-scheduler-embed-certs-521669" [d91d80ae-c8fe-4eaf-b383-05b7202992d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:59.575593  505109 system_pods.go:89] "storage-provisioner" [9c70ca0c-a52a-43a0-8221-0c1ecd43c72a] Running
	I1013 22:04:59.575602  505109 system_pods.go:126] duration metric: took 3.157141ms to wait for k8s-apps to be running ...
	I1013 22:04:59.575614  505109 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:04:59.575666  505109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:59.592023  505109 system_svc.go:56] duration metric: took 16.397591ms WaitForService to wait for kubelet
	I1013 22:04:59.592060  505109 kubeadm.go:586] duration metric: took 3.49009411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:59.592084  505109 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:59.595686  505109 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:59.595717  505109 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:59.595736  505109 node_conditions.go:105] duration metric: took 3.643199ms to run NodePressure ...
	I1013 22:04:59.595750  505109 start.go:241] waiting for startup goroutines ...
	I1013 22:04:59.595759  505109 start.go:246] waiting for cluster config update ...
	I1013 22:04:59.595775  505109 start.go:255] writing updated cluster config ...
	I1013 22:04:59.596088  505109 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:59.601136  505109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:59.605173  505109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kzq9t" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:05:01.620658  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:04.737279  501664 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501973484s
	I1013 22:05:04.751425  501664 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:05:04.766118  501664 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:05:04.783549  501664 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:05:04.783841  501664 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-200102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:05:04.794476  501664 kubeadm.go:318] [bootstrap-token] Using token: oz3cya.f6fitoruhpb1tvw4
	I1013 22:05:04.796491  501664 out.go:252]   - Configuring RBAC rules ...
	I1013 22:05:04.796630  501664 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:05:04.800695  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:05:04.808046  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:05:04.811159  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:05:04.814269  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:05:04.818421  501664 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:05:05.144916  501664 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:05:05.721680  501664 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:05:06.145738  501664 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:05:06.146653  501664 kubeadm.go:318] 
	I1013 22:05:06.146773  501664 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:05:06.146787  501664 kubeadm.go:318] 
	I1013 22:05:06.146894  501664 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:05:06.146917  501664 kubeadm.go:318] 
	I1013 22:05:06.146957  501664 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:05:06.147128  501664 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:05:06.147218  501664 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:05:06.147229  501664 kubeadm.go:318] 
	I1013 22:05:06.147331  501664 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:05:06.147347  501664 kubeadm.go:318] 
	I1013 22:05:06.147405  501664 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:05:06.147540  501664 kubeadm.go:318] 
	I1013 22:05:06.147626  501664 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:05:06.147753  501664 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:05:06.147870  501664 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:05:06.147888  501664 kubeadm.go:318] 
	I1013 22:05:06.148036  501664 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:05:06.148107  501664 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:05:06.148113  501664 kubeadm.go:318] 
	I1013 22:05:06.148186  501664 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oz3cya.f6fitoruhpb1tvw4 \
	I1013 22:05:06.148297  501664 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:05:06.148336  501664 kubeadm.go:318] 	--control-plane 
	I1013 22:05:06.148345  501664 kubeadm.go:318] 
	I1013 22:05:06.148479  501664 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:05:06.148496  501664 kubeadm.go:318] 
	I1013 22:05:06.148598  501664 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oz3cya.f6fitoruhpb1tvw4 \
	I1013 22:05:06.148755  501664 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:05:06.151600  501664 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:05:06.151768  501664 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
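
For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo). A standalone sketch that recomputes it from the node's ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
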
	I1013 22:05:06.151799  501664 cni.go:84] Creating CNI manager for "kindnet"
	I1013 22:05:06.153343  501664 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:05:02.958248  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:05.460117  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:03.680421  510068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (5.026701488s)
	I1013 22:05:03.680463  510068 kic.go:203] duration metric: took 5.026844342s to extract preloaded images to volume ...
	W1013 22:05:03.680576  510068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:05:03.680611  510068 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:05:03.680664  510068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:05:03.769831  510068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-200102 --name calico-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-200102 --network calico-200102 --ip 192.168.85.2 --volume calico-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:05:04.202138  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Running}}
	I1013 22:05:04.228646  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.257138  510068 cli_runner.go:164] Run: docker exec calico-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:05:04.315847  510068 oci.go:144] the created container "calico-200102" has a running status.
	I1013 22:05:04.316069  510068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa...
	I1013 22:05:04.559979  510068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:05:04.592619  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.616563  510068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:05:04.616586  510068 kic_runner.go:114] Args: [docker exec --privileged calico-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:05:04.669410  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.691271  510068 machine.go:93] provisionDockerMachine start ...
	I1013 22:05:04.691468  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:04.714849  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:04.715219  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:04.715240  510068 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:05:04.715987  510068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59950->127.0.0.1:33113: read: connection reset by peer
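
The handshake failure is benign: the container was created moments earlier and sshd is not listening yet, so the provisioner retries (the hostname command succeeds at 22:05:07.873 below). A stdlib-only sketch of waiting for the published SSH port, using the host port from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33113" // host port docker mapped to the container's 22/tcp
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for ssh")
}
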
	I1013 22:05:06.154402  501664 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:05:06.159850  501664 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:05:06.159868  501664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:05:06.175311  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:05:06.429809  501664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:05:06.429943  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:06.429979  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-200102 minikube.k8s.io/updated_at=2025_10_13T22_05_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=kindnet-200102 minikube.k8s.io/primary=true
	I1013 22:05:06.443204  501664 ops.go:34] apiserver oom_adj: -16
	I1013 22:05:06.532903  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:07.033717  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:07.533552  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1013 22:05:03.648769  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:06.111651  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:08.113860  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:08.033818  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:08.533976  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:09.033818  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:09.533800  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:10.033708  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:10.533605  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:11.033122  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:11.117150  501664 kubeadm.go:1113] duration metric: took 4.687327675s to wait for elevateKubeSystemPrivileges
	I1013 22:05:11.117196  501664 kubeadm.go:402] duration metric: took 19.137678587s to StartCluster
	I1013 22:05:11.117220  501664 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:11.117303  501664 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:11.119575  501664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:11.119920  501664 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:11.120076  501664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:05:11.120248  501664 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:05:11.120356  501664 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:11.120361  501664 addons.go:69] Setting storage-provisioner=true in profile "kindnet-200102"
	I1013 22:05:11.120380  501664 addons.go:238] Setting addon storage-provisioner=true in "kindnet-200102"
	I1013 22:05:11.120406  501664 addons.go:69] Setting default-storageclass=true in profile "kindnet-200102"
	I1013 22:05:11.120413  501664 host.go:66] Checking if "kindnet-200102" exists ...
	I1013 22:05:11.120426  501664 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-200102"
	I1013 22:05:11.120816  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.120984  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.121297  501664 out.go:179] * Verifying Kubernetes components...
	I1013 22:05:11.123542  501664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:11.153698  501664 addons.go:238] Setting addon default-storageclass=true in "kindnet-200102"
	I1013 22:05:11.153749  501664 host.go:66] Checking if "kindnet-200102" exists ...
	I1013 22:05:11.154230  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.160303  501664 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:05:11.178414  501664 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:11.178447  501664 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:05:11.178515  501664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-200102
	I1013 22:05:11.183749  501664 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:11.183777  501664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:05:11.183842  501664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-200102
	I1013 22:05:11.211405  501664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/kindnet-200102/id_rsa Username:docker}
	I1013 22:05:11.216460  501664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/kindnet-200102/id_rsa Username:docker}
	I1013 22:05:11.289434  501664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:05:11.299130  501664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:11.330360  501664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:11.342206  501664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:11.551930  501664 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
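
The sed pipeline at 22:05:11.289 above edits the coredns ConfigMap in flight: it inserts a hosts{} block resolving host.minikube.internal to the gateway address just ahead of the forward plugin (it also enables the log plugin, omitted here). The same insertion expressed in Go, against an assumed stock Corefile fragment:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed stock Corefile fragment; the real one lives in the coredns ConfigMap.
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }
`
	hosts := `        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
`
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // inject just before the forward plugin
		}
		b.WriteString(line)
	}
	fmt.Print(b.String())
}
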
	I1013 22:05:11.553616  501664 node_ready.go:35] waiting up to 15m0s for node "kindnet-200102" to be "Ready" ...
	I1013 22:05:11.811570  501664 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1013 22:05:07.959098  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:10.458521  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:07.873540  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-200102
	
	I1013 22:05:07.873574  510068 ubuntu.go:182] provisioning hostname "calico-200102"
	I1013 22:05:07.873669  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:07.897536  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:07.897835  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:07.897857  510068 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-200102 && echo "calico-200102" | sudo tee /etc/hostname
	I1013 22:05:08.066880  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-200102
	
	I1013 22:05:08.067002  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:08.090660  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:08.091020  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:08.091052  510068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:05:08.247648  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:05:08.247751  510068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:05:08.247803  510068 ubuntu.go:190] setting up certificates
	I1013 22:05:08.247816  510068 provision.go:84] configureAuth start
	I1013 22:05:08.247882  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:08.268038  510068 provision.go:143] copyHostCerts
	I1013 22:05:08.268142  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:05:08.268156  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:05:08.268249  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:05:08.268390  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:05:08.268404  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:05:08.268451  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:05:08.268547  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:05:08.268561  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:05:08.268601  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:05:08.268763  510068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.calico-200102 san=[127.0.0.1 192.168.85.2 calico-200102 localhost minikube]
	I1013 22:05:09.222116  510068 provision.go:177] copyRemoteCerts
	I1013 22:05:09.222210  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:05:09.222264  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.245308  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:09.358834  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:05:09.386380  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:05:09.410631  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:05:09.435966  510068 provision.go:87] duration metric: took 1.188131204s to configureAuth
	I1013 22:05:09.436012  510068 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:05:09.436221  510068 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:09.436360  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.462663  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:09.462968  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:09.463026  510068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:05:09.750125  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:05:09.750153  510068 machine.go:96] duration metric: took 5.058859508s to provisionDockerMachine
	I1013 22:05:09.750167  510068 client.go:171] duration metric: took 11.856262633s to LocalClient.Create
	I1013 22:05:09.750191  510068 start.go:167] duration metric: took 11.856329479s to libmachine.API.Create "calico-200102"
	I1013 22:05:09.750204  510068 start.go:293] postStartSetup for "calico-200102" (driver="docker")
	I1013 22:05:09.750218  510068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:05:09.750291  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:05:09.750357  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.770290  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:09.882791  510068 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:05:09.887101  510068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:05:09.887130  510068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:05:09.887144  510068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:05:09.887199  510068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:05:09.887291  510068 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:05:09.887409  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:05:09.895918  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:09.923938  510068 start.go:296] duration metric: took 173.715275ms for postStartSetup
	I1013 22:05:09.924415  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:09.943419  510068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json ...
	I1013 22:05:09.943827  510068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:05:09.943885  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.964219  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.063280  510068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:05:10.068549  510068 start.go:128] duration metric: took 12.181865397s to createHost
	I1013 22:05:10.068576  510068 start.go:83] releasing machines lock for "calico-200102", held for 12.182034083s
	I1013 22:05:10.068644  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:10.087861  510068 ssh_runner.go:195] Run: cat /version.json
	I1013 22:05:10.087890  510068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:05:10.087924  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:10.087979  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:10.110561  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.110930  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.283865  510068 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:10.291080  510068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:05:10.330093  510068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:05:10.335465  510068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:05:10.335549  510068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:05:10.364854  510068 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:05:10.364883  510068 start.go:495] detecting cgroup driver to use...
	I1013 22:05:10.364929  510068 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:05:10.365017  510068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:05:10.382335  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:05:10.395980  510068 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:05:10.396157  510068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:05:10.414935  510068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:05:10.437151  510068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:05:10.532226  510068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:05:10.636456  510068 docker.go:234] disabling docker service ...
	I1013 22:05:10.636527  510068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:05:10.658108  510068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:05:10.675059  510068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:05:10.778198  510068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:05:10.874890  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:05:10.888836  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:05:10.907088  510068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:05:10.907161  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.919727  510068 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:05:10.919817  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.932019  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.941567  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.951375  510068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:05:10.961899  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.972671  510068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.988921  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.998801  510068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:05:11.007356  510068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:05:11.016548  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:11.128481  510068 ssh_runner.go:195] Run: sudo systemctl restart crio
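The run above is the whole of minikube's CRI-O preparation: point crictl at the CRI-O socket, patch /etc/crio/crio.conf.d/02-crio.conf for the detected cgroup driver and pinned pause image, enable IP forwarding, and restart the service. A condensed shell sketch of those same steps, with every path and value taken from the log lines above:

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	# Match the host's systemd cgroup driver and the pinned pause image.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	# Kubernetes networking prerequisite, then apply everything.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio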
	I1013 22:05:11.866877  510068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:05:11.866963  510068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:05:11.871447  510068 start.go:563] Will wait 60s for crictl version
	I1013 22:05:11.871509  510068 ssh_runner.go:195] Run: which crictl
	I1013 22:05:11.875519  510068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:05:11.902820  510068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:05:11.902911  510068 ssh_runner.go:195] Run: crio --version
	I1013 22:05:11.933859  510068 ssh_runner.go:195] Run: crio --version
	I1013 22:05:11.967499  510068 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:05:11.968547  510068 cli_runner.go:164] Run: docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:11.986107  510068 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:05:11.990478  510068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:12.001588  510068 kubeadm.go:883] updating cluster {Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:05:12.001693  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:12.001735  510068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:12.037178  510068 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:12.037208  510068 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:05:12.037264  510068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:12.066282  510068 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:12.066310  510068 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:05:12.066318  510068 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:05:12.066404  510068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1013 22:05:12.066509  510068 ssh_runner.go:195] Run: crio config
	I1013 22:05:12.116528  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:05:12.116564  510068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:05:12.116593  510068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-200102 NodeName:calico-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:05:12.116754  510068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:05:12.116830  510068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:05:12.126115  510068 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:05:12.126177  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:05:12.134784  510068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 22:05:12.149452  510068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:05:12.165858  510068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
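The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new, promoted to /var/tmp/minikube/kubeadm.yaml, and fed to kubeadm init further down. If a config like this needs checking outside the test run, kubeadm can exercise it without mutating node state; a minimal sketch, assuming the same v1.34.1 binary location used above:

	# Parse and validate the generated config, creating nothing on the node.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run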
	I1013 22:05:12.179349  510068 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:05:12.183640  510068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:12.194961  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:12.278391  510068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:12.301670  510068 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102 for IP: 192.168.85.2
	I1013 22:05:12.301701  510068 certs.go:195] generating shared ca certs ...
	I1013 22:05:12.301723  510068 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.301902  510068 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:05:12.301971  510068 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:05:12.302005  510068 certs.go:257] generating profile certs ...
	I1013 22:05:12.302088  510068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key
	I1013 22:05:12.302112  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt with IP's: []
	I1013 22:05:11.813105  501664 addons.go:514] duration metric: took 692.853301ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 22:05:12.057274  501664 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-200102" context rescaled to 1 replicas
	W1013 22:05:10.611671  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:13.111625  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:12.957981  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:14.957378  496036 pod_ready.go:94] pod "coredns-66bc5c9577-5x8dn" is "Ready"
	I1013 22:05:14.957409  496036 pod_ready.go:86] duration metric: took 37.006265036s for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.960048  496036 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.963954  496036 pod_ready.go:94] pod "etcd-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:14.964018  496036 pod_ready.go:86] duration metric: took 3.944974ms for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.965934  496036 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.969538  496036 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:14.969561  496036 pod_ready.go:86] duration metric: took 3.602866ms for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.971555  496036 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.155773  496036 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:15.155807  496036 pod_ready.go:86] duration metric: took 184.228441ms for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.356368  496036 pod_ready.go:83] waiting for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.755300  496036 pod_ready.go:94] pod "kube-proxy-27pnt" is "Ready"
	I1013 22:05:15.755335  496036 pod_ready.go:86] duration metric: took 398.933791ms for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.955632  496036 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:16.355575  496036 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:16.355608  496036 pod_ready.go:86] duration metric: took 399.945662ms for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:16.355624  496036 pod_ready.go:40] duration metric: took 38.410140408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:16.409234  496036 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:16.411422  496036 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-505851" cluster and "default" namespace by default
	I1013 22:05:12.767762  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt ...
	I1013 22:05:12.767793  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt: {Name:mkb46b714d46426c42aba7afd5b837077b9a2d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.768025  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key ...
	I1013 22:05:12.768044  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key: {Name:mk9ef563cfcf73cd77c8f23e63c10a2813d8195a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.768160  510068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3
	I1013 22:05:12.768180  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:05:12.893545  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 ...
	I1013 22:05:12.893581  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3: {Name:mk142a6cc57775ce69692e883da3c4477b1dcf08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.893824  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3 ...
	I1013 22:05:12.893857  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3: {Name:mk7a017cff5c03088d8aaaebd3f515e3d2053adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.893986  510068 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt
	I1013 22:05:12.894140  510068 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key
	I1013 22:05:12.894238  510068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key
	I1013 22:05:12.894261  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt with IP's: []
	I1013 22:05:12.999959  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt ...
	I1013 22:05:13.000004  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt: {Name:mkd1eea43b2f771966d0a5900a3731d27f60cf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:13.000201  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key ...
	I1013 22:05:13.000223  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key: {Name:mk65315728dd1a718f10fbcd810639e1b27f39e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:13.000442  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:05:13.000496  510068 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:05:13.000513  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:05:13.000550  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:05:13.000585  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:05:13.000618  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:05:13.000688  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:13.001336  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:05:13.024464  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:05:13.043041  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:05:13.062435  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:05:13.080885  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:05:13.100649  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:05:13.121211  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:05:13.140903  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:05:13.160444  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:05:13.181619  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:05:13.202766  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:05:13.223639  510068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:05:13.237785  510068 ssh_runner.go:195] Run: openssl version
	I1013 22:05:13.244659  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:05:13.254318  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.258462  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.258522  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.294822  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:05:13.304976  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:05:13.314219  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.318666  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.318726  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.355667  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:05:13.365323  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:05:13.374504  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.378540  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.378631  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.416573  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
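The openssl x509 -hash calls above compute the OpenSSL subject hash that names each /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs). A minimal sketch of the same convention, using the minikubeCA path from the log:

	# The symlink name is the certificate's subject hash plus ".0".
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"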
	I1013 22:05:13.426828  510068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:05:13.430934  510068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:05:13.431012  510068 kubeadm.go:400] StartCluster: {Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:05:13.431093  510068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:05:13.431136  510068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:05:13.461432  510068 cri.go:89] found id: ""
	I1013 22:05:13.461490  510068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:05:13.470725  510068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:05:13.479768  510068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:05:13.479821  510068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:05:13.488324  510068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:05:13.488345  510068 kubeadm.go:157] found existing configuration files:
	
	I1013 22:05:13.488398  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:05:13.496639  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:05:13.496694  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:05:13.506391  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:05:13.517088  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:05:13.517148  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:05:13.526789  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:05:13.535104  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:05:13.535170  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:05:13.543494  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:05:13.551901  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:05:13.551963  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:05:13.560880  510068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:05:13.624077  510068 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:05:13.685194  510068 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1013 22:05:13.556617  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:15.557197  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:17.557726  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:15.111742  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:17.111871  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:20.061147  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	I1013 22:05:22.569075  501664 node_ready.go:49] node "kindnet-200102" is "Ready"
	I1013 22:05:22.569118  501664 node_ready.go:38] duration metric: took 11.015467972s for node "kindnet-200102" to be "Ready" ...
	I1013 22:05:22.569137  501664 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:05:22.569206  501664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:05:22.587946  501664 api_server.go:72] duration metric: took 11.46797891s to wait for apiserver process to appear ...
	I1013 22:05:22.587978  501664 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:05:22.588042  501664 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:05:22.593031  501664 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:05:22.594109  501664 api_server.go:141] control plane version: v1.34.1
	I1013 22:05:22.594138  501664 api_server.go:131] duration metric: took 6.152777ms to wait for apiserver health ...
	I1013 22:05:22.594148  501664 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:05:22.597814  501664 system_pods.go:59] 8 kube-system pods found
	I1013 22:05:22.597849  501664 system_pods.go:61] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.597857  501664 system_pods.go:61] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.597862  501664 system_pods.go:61] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.597865  501664 system_pods.go:61] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.597869  501664 system_pods.go:61] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.597873  501664 system_pods.go:61] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.597876  501664 system_pods.go:61] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.597880  501664 system_pods.go:61] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.597895  501664 system_pods.go:74] duration metric: took 3.738699ms to wait for pod list to return data ...
	I1013 22:05:22.597905  501664 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:05:22.600678  501664 default_sa.go:45] found service account: "default"
	I1013 22:05:22.600702  501664 default_sa.go:55] duration metric: took 2.789012ms for default service account to be created ...
	I1013 22:05:22.600714  501664 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:05:22.604289  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:22.604324  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.604338  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.604346  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.604352  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.604365  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.604370  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.604375  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.604383  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.604410  501664 retry.go:31] will retry after 212.861156ms: missing components: kube-dns
	I1013 22:05:22.822132  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:22.822186  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.822194  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.822202  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.822215  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.822222  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.822236  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.822241  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.822254  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.822273  501664 retry.go:31] will retry after 235.519358ms: missing components: kube-dns
	I1013 22:05:23.515178  510068 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:05:23.515261  510068 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:05:23.515375  510068 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:05:23.515455  510068 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:05:23.515509  510068 kubeadm.go:318] OS: Linux
	I1013 22:05:23.515582  510068 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:05:23.515662  510068 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:05:23.515749  510068 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:05:23.515829  510068 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:05:23.515899  510068 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:05:23.515983  510068 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:05:23.516092  510068 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:05:23.516163  510068 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:05:23.516284  510068 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:05:23.516429  510068 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:05:23.516544  510068 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:05:23.516621  510068 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:05:23.518293  510068 out.go:252]   - Generating certificates and keys ...
	I1013 22:05:23.518368  510068 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:05:23.518425  510068 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:05:23.518510  510068 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:05:23.518608  510068 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:05:23.518704  510068 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:05:23.518783  510068 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:05:23.518870  510068 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:05:23.519083  510068 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:05:23.519175  510068 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:05:23.519315  510068 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:05:23.519415  510068 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:05:23.519512  510068 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:05:23.519574  510068 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:05:23.519654  510068 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:05:23.519766  510068 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:05:23.519853  510068 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:05:23.519931  510068 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:05:23.520053  510068 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:05:23.520139  510068 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:05:23.520248  510068 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:05:23.520363  510068 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:05:23.521863  510068 out.go:252]   - Booting up control plane ...
	I1013 22:05:23.521939  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:05:23.522032  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:05:23.522093  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:05:23.522199  510068 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:05:23.522298  510068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:05:23.522404  510068 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:05:23.522491  510068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:05:23.522538  510068 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:05:23.522657  510068 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:05:23.522755  510068 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:05:23.522815  510068 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001913234s
	I1013 22:05:23.522911  510068 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:05:23.523011  510068 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:05:23.523106  510068 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:05:23.523176  510068 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:05:23.523287  510068 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 795.308239ms
	I1013 22:05:23.523404  510068 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.82867851s
	I1013 22:05:23.523512  510068 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501444312s
	I1013 22:05:23.523693  510068 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:05:23.523889  510068 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:05:23.524002  510068 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:05:23.524281  510068 kubeadm.go:318] [mark-control-plane] Marking the node calico-200102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:05:23.524372  510068 kubeadm.go:318] [bootstrap-token] Using token: mye8a6.r1jyitw9zae9z8t9
	W1013 22:05:19.611754  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:22.111391  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:23.525789  510068 out.go:252]   - Configuring RBAC rules ...
	I1013 22:05:23.525968  510068 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:05:23.526131  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:05:23.526323  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:05:23.526500  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:05:23.526664  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:05:23.526794  510068 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:05:23.526960  510068 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:05:23.527074  510068 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:05:23.527151  510068 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:05:23.527167  510068 kubeadm.go:318] 
	I1013 22:05:23.527243  510068 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:05:23.527260  510068 kubeadm.go:318] 
	I1013 22:05:23.527350  510068 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:05:23.527366  510068 kubeadm.go:318] 
	I1013 22:05:23.527398  510068 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:05:23.527479  510068 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:05:23.527556  510068 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:05:23.527571  510068 kubeadm.go:318] 
	I1013 22:05:23.527644  510068 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:05:23.527653  510068 kubeadm.go:318] 
	I1013 22:05:23.527714  510068 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:05:23.527724  510068 kubeadm.go:318] 
	I1013 22:05:23.527788  510068 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:05:23.527884  510068 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:05:23.527959  510068 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:05:23.527972  510068 kubeadm.go:318] 
	I1013 22:05:23.528097  510068 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:05:23.528185  510068 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:05:23.528190  510068 kubeadm.go:318] 
	I1013 22:05:23.528280  510068 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mye8a6.r1jyitw9zae9z8t9 \
	I1013 22:05:23.528396  510068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:05:23.528423  510068 kubeadm.go:318] 	--control-plane 
	I1013 22:05:23.528428  510068 kubeadm.go:318] 
	I1013 22:05:23.528525  510068 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:05:23.528530  510068 kubeadm.go:318] 
	I1013 22:05:23.528631  510068 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mye8a6.r1jyitw9zae9z8t9 \
	I1013 22:05:23.528767  510068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
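The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard procedure from the kubeadm documentation; a sketch assuming an RSA CA at the certificateDir configured above:

	# Recompute the discovery hash from the cluster CA (RSA keys).
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'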
	I1013 22:05:23.528778  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:05:23.531221  510068 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1013 22:05:23.061843  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.061881  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:23.061889  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.061893  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.061897  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.061900  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.061905  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.061908  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.061913  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:23.061932  501664 retry.go:31] will retry after 429.009418ms: missing components: kube-dns
	I1013 22:05:23.494806  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.494859  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:23.494869  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.494875  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.494880  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.494885  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.494891  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.494896  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.494903  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:23.494924  501664 retry.go:31] will retry after 417.528613ms: missing components: kube-dns
	I1013 22:05:23.917336  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.917367  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Running
	I1013 22:05:23.917373  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.917377  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.917381  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.917384  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.917388  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.917391  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.917395  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Running
	I1013 22:05:23.917404  501664 system_pods.go:126] duration metric: took 1.316682025s to wait for k8s-apps to be running ...
	I1013 22:05:23.917414  501664 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:05:23.917466  501664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:23.935495  501664 system_svc.go:56] duration metric: took 18.065606ms WaitForService to wait for kubelet
	I1013 22:05:23.935546  501664 kubeadm.go:586] duration metric: took 12.815588536s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:23.935574  501664 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:05:23.939341  501664 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:05:23.939377  501664 node_conditions.go:123] node cpu capacity is 8
	I1013 22:05:23.939393  501664 node_conditions.go:105] duration metric: took 3.812781ms to run NodePressure ...
	I1013 22:05:23.939409  501664 start.go:241] waiting for startup goroutines ...
	I1013 22:05:23.939422  501664 start.go:246] waiting for cluster config update ...
	I1013 22:05:23.939436  501664 start.go:255] writing updated cluster config ...
	I1013 22:05:23.939771  501664 ssh_runner.go:195] Run: rm -f paused
	I1013 22:05:23.945463  501664 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:23.950180  501664 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l4nxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.955649  501664 pod_ready.go:94] pod "coredns-66bc5c9577-l4nxp" is "Ready"
	I1013 22:05:23.955682  501664 pod_ready.go:86] duration metric: took 5.476054ms for pod "coredns-66bc5c9577-l4nxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.958233  501664 pod_ready.go:83] waiting for pod "etcd-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.963222  501664 pod_ready.go:94] pod "etcd-kindnet-200102" is "Ready"
	I1013 22:05:23.963255  501664 pod_ready.go:86] duration metric: took 4.994304ms for pod "etcd-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.965583  501664 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.970232  501664 pod_ready.go:94] pod "kube-apiserver-kindnet-200102" is "Ready"
	I1013 22:05:23.970260  501664 pod_ready.go:86] duration metric: took 4.653962ms for pod "kube-apiserver-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.972668  501664 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.351055  501664 pod_ready.go:94] pod "kube-controller-manager-kindnet-200102" is "Ready"
	I1013 22:05:24.351089  501664 pod_ready.go:86] duration metric: took 378.386921ms for pod "kube-controller-manager-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.551099  501664 pod_ready.go:83] waiting for pod "kube-proxy-ppbkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.950531  501664 pod_ready.go:94] pod "kube-proxy-ppbkr" is "Ready"
	I1013 22:05:24.950562  501664 pod_ready.go:86] duration metric: took 399.435549ms for pod "kube-proxy-ppbkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.151177  501664 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.550933  501664 pod_ready.go:94] pod "kube-scheduler-kindnet-200102" is "Ready"
	I1013 22:05:25.550968  501664 pod_ready.go:86] duration metric: took 399.763244ms for pod "kube-scheduler-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.551025  501664 pod_ready.go:40] duration metric: took 1.605516253s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:25.599669  501664 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:25.601833  501664 out.go:179] * Done! kubectl is now configured to use "kindnet-200102" cluster and "default" namespace by default
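
[editor's note] The wait loop that precedes this (system_pods.go / retry.go) polls the kube-system pods and sleeps a few hundred milliseconds between attempts until kube-dns reports Running. A minimal Go sketch of that poll-with-backoff pattern, assuming a hypothetical checkPods callback:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitReady polls checkPods until it returns nil or the timeout elapses,
// sleeping a short randomized interval between attempts, mirroring the
// "will retry after ..." lines emitted by retry.go above.
func waitReady(timeout time.Duration, checkPods func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := checkPods()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods: %w", err)
		}
		backoff := 400*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}

func main() {
	attempts := 0
	err := waitReady(5*time.Second, func() error {
		attempts++
		if attempts < 3 { // hypothetical: kube-dns comes up on the third poll
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("wait finished, err =", err)
}
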
	I1013 22:05:23.533882  510068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:05:23.533909  510068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1013 22:05:23.552779  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:05:24.430184  510068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:05:24.430344  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:24.430455  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-200102 minikube.k8s.io/updated_at=2025_10_13T22_05_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=calico-200102 minikube.k8s.io/primary=true
	I1013 22:05:24.442633  510068 ops.go:34] apiserver oom_adj: -16
	I1013 22:05:24.505508  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:25.006121  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:25.506207  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:26.005725  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:26.506224  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:27.006060  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:27.505887  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1013 22:05:24.114346  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:26.611754  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:28.005547  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:28.506422  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:28.594025  510068 kubeadm.go:1113] duration metric: took 4.163747427s to wait for elevateKubeSystemPrivileges
	I1013 22:05:28.594067  510068 kubeadm.go:402] duration metric: took 15.163058578s to StartCluster
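
[editor's note] The half-second cadence of the repeated `kubectl get sa default` runs above is the privilege-elevation wait, and the two "duration metric: took ..." lines report elapsed time. A minimal sketch of a fixed-cadence poll with that accounting, where pollDefaultSA is a hypothetical stand-in for running the kubectl command over SSH:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForDefaultSA polls at a fixed 500ms cadence until the check passes
// or the timeout expires, returning the elapsed duration either way.
func waitForDefaultSA(timeout time.Duration, pollDefaultSA func() error) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.After(timeout)
	for {
		if err := pollDefaultSA(); err == nil {
			return time.Since(start), nil
		}
		select {
		case <-ticker.C: // next attempt
		case <-deadline:
			return time.Since(start), errors.New("timed out waiting for default service account")
		}
	}
}

func main() {
	calls := 0
	took, err := waitForDefaultSA(10*time.Second, func() error {
		calls++
		if calls < 9 { // hypothetical: the SA appears on the ninth poll
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	})
	fmt.Printf("duration metric: took %v, err=%v\n", took, err)
}
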
	I1013 22:05:28.594091  510068 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:28.594198  510068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:28.596275  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:28.596506  510068 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:28.596527  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:05:28.596595  510068 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:05:28.596690  510068 addons.go:69] Setting storage-provisioner=true in profile "calico-200102"
	I1013 22:05:28.596718  510068 addons.go:238] Setting addon storage-provisioner=true in "calico-200102"
	I1013 22:05:28.596720  510068 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:28.596756  510068 host.go:66] Checking if "calico-200102" exists ...
	I1013 22:05:28.596776  510068 addons.go:69] Setting default-storageclass=true in profile "calico-200102"
	I1013 22:05:28.596800  510068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-200102"
	I1013 22:05:28.597274  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.597352  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.598213  510068 out.go:179] * Verifying Kubernetes components...
	I1013 22:05:28.599521  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:28.625842  510068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:05:28.627374  510068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:28.627396  510068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:05:28.627457  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:28.630419  510068 addons.go:238] Setting addon default-storageclass=true in "calico-200102"
	I1013 22:05:28.630464  510068 host.go:66] Checking if "calico-200102" exists ...
	I1013 22:05:28.630864  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.653086  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:28.657165  510068 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:28.657305  510068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:05:28.657377  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:28.680285  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
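
[editor's note] The -f argument passed to `docker container inspect` above is a Go text/template: index the port map by "22/tcp", take element 0, then read .HostPort. A minimal sketch of how that expression resolves, against a simplified data shape (the real template walks .NetworkSettings.Ports on the inspected container):

package main

import (
	"fmt"
	"os"
	"text/template"
)

// portBinding mimics one entry of docker's NetworkSettings.Ports map.
type portBinding struct{ HostPort string }

func main() {
	// Simplified stand-in for .NetworkSettings.Ports.
	ports := map[string][]portBinding{
		"22/tcp": {{HostPort: "33113"}}, // the host port seen in the log
	}
	// Same shape as the --format expression in the log.
	tmpl := template.Must(template.New("hostport").Parse(
		`{{(index (index . "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, ports); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println() // prints 33113
}
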
	I1013 22:05:28.700254  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:05:28.768442  510068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:28.784087  510068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:28.799160  510068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:28.908126  510068 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 22:05:28.910092  510068 node_ready.go:35] waiting up to 15m0s for node "calico-200102" to be "Ready" ...
	I1013 22:05:29.140656  510068 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
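
[editor's note] The coredns ConfigMap rewrite a few lines up (the sed pipeline run over `kubectl get configmap coredns`, confirmed by the "host record injected" line) inserts a hosts block for host.minikube.internal ahead of the forward directive and a log directive ahead of errors. A minimal Go sketch of the same string transformation, using an illustrative Corefile fragment:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord reproduces, in Go, the sed edits from the log: a hosts{}
// stanza mapping host.minikube.internal to the host IP goes in front of the
// forward directive, and a log directive goes in front of errors.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	out := strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
	return strings.Replace(out, "        errors", "        log\n        errors", 1)
}

func main() {
	// Illustrative fragment; the real one comes from the coredns ConfigMap.
	corefile := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.85.1"))
}
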
	
	
	==> CRI-O <==
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.464511356Z" level=info msg="Created container e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc/kubernetes-dashboard" id=c15e343e-21dc-4398-8a35-c8477ad76dd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.465537397Z" level=info msg="Starting container: e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470" id=bc337f74-9c3f-4589-bc02-7159cbb3ab88 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.46795386Z" level=info msg="Started container" PID=1721 containerID=e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc/kubernetes-dashboard id=bc337f74-9c3f-4589-bc02-7159cbb3ab88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02fe94f0f6776d94f67e73e143858b641d2270e4b46439a1f9d3e19c9ef4fb76
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.032565808Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a5e3ba48-6a9d-47e1-89ae-c9da8f844cb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.03417933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ac25a4ca-d15f-45eb-b5b3-e3577b1c35ef name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.038115606Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=cdbc32b9-5d3b-4e38-9268-48c93de44bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.039288657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.048363923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.052168803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.092863746Z" level=info msg="Created container ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=cdbc32b9-5d3b-4e38-9268-48c93de44bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.094722994Z" level=info msg="Starting container: ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7" id=470361e5-745e-4ebe-be2d-31001ac0bd81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.098821906Z" level=info msg="Started container" PID=1737 containerID=ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper id=470361e5-745e-4ebe-be2d-31001ac0bd81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6d149ff7fed67aa6cd26de59f4b11938e5c5377d70a88a5250d0026ed632337
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.163532913Z" level=info msg="Removing container: 4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8" id=11c739a8-1595-4fa6-9206-92ca7b590ccf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.181272739Z" level=info msg="Removed container 4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=11c739a8-1595-4fa6-9206-92ca7b590ccf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.176020758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=53664a35-4341-411a-a88a-e14834b94232 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.176981109Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2dc8d86-717e-4e99-91c2-0944da48aafb name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.178093492Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1c200446-263a-46d4-bcc7-85ca149affd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.178404367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184290224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184492274Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c4bc52c201c482b143c8db07a5e15f76758faf44781cff564cd1d01f76b4459e/merged/etc/passwd: no such file or directory"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184531803Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c4bc52c201c482b143c8db07a5e15f76758faf44781cff564cd1d01f76b4459e/merged/etc/group: no such file or directory"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.185159006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.218657405Z" level=info msg="Created container 62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220: kube-system/storage-provisioner/storage-provisioner" id=1c200446-263a-46d4-bcc7-85ca149affd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.219413991Z" level=info msg="Starting container: 62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220" id=5b6f0bd7-3a9c-4725-828d-ef04f08b421f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.221578526Z" level=info msg="Started container" PID=1751 containerID=62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220 description=kube-system/storage-provisioner/storage-provisioner id=5b6f0bd7-3a9c-4725-828d-ef04f08b421f name=/runtime.v1.RuntimeService/StartContainer sandboxID=349e6e12e6a2f647c2249f26142e1cda0e6da42211083b86476a891760e4bb9d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	62e1fa758a47e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   349e6e12e6a2f       storage-provisioner                                    kube-system
	ae38e1db97695       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   e6d149ff7fed6       dashboard-metrics-scraper-6ffb444bf9-k87hj             kubernetes-dashboard
	e976a19b88a83       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   02fe94f0f6776       kubernetes-dashboard-855c9754f9-2xpgc                  kubernetes-dashboard
	47a2f7003ce93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   945197391dfe9       coredns-66bc5c9577-5x8dn                               kube-system
	f51ad4d1b0bc3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   e1c80b27dcf65       busybox                                                default
	73688ac341637       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   7d26cb2a98598       kube-proxy-27pnt                                       kube-system
	56588477c61cd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   e29b793bb5f80       kindnet-m5whc                                          kube-system
	648d22473246e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   349e6e12e6a2f       storage-provisioner                                    kube-system
	adda782c2ba2a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   a42eec82a0ad5       kube-controller-manager-default-k8s-diff-port-505851   kube-system
	90e2257cdef16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   d5f75859c9e3a       kube-scheduler-default-k8s-diff-port-505851            kube-system
	a4123f9428043       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   2ee5dd07f5855       kube-apiserver-default-k8s-diff-port-505851            kube-system
	4e42bb1ca9412       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   98c579e62ad92       etcd-default-k8s-diff-port-505851                      kube-system
	
	
	==> coredns [47a2f7003ce93cda6369bdcfca70a589ca8b8c7e50b0ec90f8b055885ba36ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40613 - 45690 "HINFO IN 6117315671624123169.9074301599654801202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.121960443s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-505851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-505851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-505851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-505851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:05:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-505851
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                ff284ab0-6ab9-4288-9f40-64d181496243
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-5x8dn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-505851                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-m5whc                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-505851             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-505851    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-27pnt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-505851             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k87hj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2xpgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node default-k8s-diff-port-505851 event: Registered Node default-k8s-diff-port-505851 in Controller
	  Normal  NodeReady                97s                  kubelet          Node default-k8s-diff-port-505851 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                  node-controller  Node default-k8s-diff-port-505851 event: Registered Node default-k8s-diff-port-505851 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [4e42bb1ca9412735b924cae876a0503b479855539f2a50a515e9f235dd2a15ee] <==
	{"level":"warn","ts":"2025-10-13T22:04:35.785142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.793476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.800031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.807586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.814223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.832674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.839830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.847092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.901803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:04:41.874509Z","caller":"traceutil/trace.go:172","msg":"trace[61837538] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"118.415396ms","start":"2025-10-13T22:04:41.756072Z","end":"2025-10-13T22:04:41.874488Z","steps":["trace[61837538] 'process raft request'  (duration: 41.476629ms)","trace[61837538] 'compare'  (duration: 76.800513ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:42.057176Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.254584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-5x8dn\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-13T22:04:42.057267Z","caller":"traceutil/trace.go:172","msg":"trace[1749292618] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-5x8dn; range_end:; response_count:1; response_revision:548; }","duration":"103.370668ms","start":"2025-10-13T22:04:41.953885Z","end":"2025-10-13T22:04:42.057256Z","steps":["trace[1749292618] 'agreement among raft nodes before linearized reading'  (duration: 86.864245ms)","trace[1749292618] 'range keys from in-memory index tree'  (duration: 16.29555ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.057204Z","caller":"traceutil/trace.go:172","msg":"trace[1173754598] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"144.925267ms","start":"2025-10-13T22:04:41.912254Z","end":"2025-10-13T22:04:42.057179Z","steps":["trace[1173754598] 'process raft request'  (duration: 128.567428ms)","trace[1173754598] 'compare'  (duration: 16.225004ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.196570Z","caller":"traceutil/trace.go:172","msg":"trace[574587180] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"134.708251ms","start":"2025-10-13T22:04:42.061837Z","end":"2025-10-13T22:04:42.196545Z","steps":["trace[574587180] 'process raft request'  (duration: 125.849322ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:42.375114Z","caller":"traceutil/trace.go:172","msg":"trace[164140328] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"173.218299ms","start":"2025-10-13T22:04:42.201880Z","end":"2025-10-13T22:04:42.375098Z","steps":["trace[164140328] 'process raft request'  (duration: 173.022409ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:42.512082Z","caller":"traceutil/trace.go:172","msg":"trace[1934766990] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"132.552268ms","start":"2025-10-13T22:04:42.379511Z","end":"2025-10-13T22:04:42.512064Z","steps":["trace[1934766990] 'process raft request'  (duration: 98.435319ms)","trace[1934766990] 'compare'  (duration: 33.956985ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.711264Z","caller":"traceutil/trace.go:172","msg":"trace[1507671136] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"194.662417ms","start":"2025-10-13T22:04:42.516574Z","end":"2025-10-13T22:04:42.711236Z","steps":["trace[1507671136] 'process raft request'  (duration: 127.876323ms)","trace[1507671136] 'compare'  (duration: 66.631691ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.946590Z","caller":"traceutil/trace.go:172","msg":"trace[467698538] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"161.986081ms","start":"2025-10-13T22:04:42.784587Z","end":"2025-10-13T22:04:42.946574Z","steps":["trace[467698538] 'process raft request'  (duration: 129.686885ms)","trace[467698538] 'compare'  (duration: 32.173011ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:43.067772Z","caller":"traceutil/trace.go:172","msg":"trace[1360641468] linearizableReadLoop","detail":"{readStateIndex:585; appliedIndex:585; }","duration":"114.528233ms","start":"2025-10-13T22:04:42.953215Z","end":"2025-10-13T22:04:43.067743Z","steps":["trace[1360641468] 'read index received'  (duration: 114.515674ms)","trace[1360641468] 'applied index is now lower than readState.Index'  (duration: 11.359µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:43.078280Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.02167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-5x8dn\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-13T22:04:43.078354Z","caller":"traceutil/trace.go:172","msg":"trace[1023813438] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-5x8dn; range_end:; response_count:1; response_revision:557; }","duration":"125.130272ms","start":"2025-10-13T22:04:42.953203Z","end":"2025-10-13T22:04:43.078333Z","steps":["trace[1023813438] 'agreement among raft nodes before linearized reading'  (duration: 114.619246ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:43.078291Z","caller":"traceutil/trace.go:172","msg":"trace[1416416100] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"126.900256ms","start":"2025-10-13T22:04:42.951374Z","end":"2025-10-13T22:04:43.078275Z","steps":["trace[1416416100] 'process raft request'  (duration: 116.411456ms)","trace[1416416100] 'compare'  (duration: 10.380665ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:43.326043Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.324526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-505851.186e2c2c81dd20d9\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-10-13T22:04:43.326143Z","caller":"traceutil/trace.go:172","msg":"trace[151872554] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-505851.186e2c2c81dd20d9; range_end:; response_count:1; response_revision:562; }","duration":"147.40721ms","start":"2025-10-13T22:04:43.178686Z","end":"2025-10-13T22:04:43.326093Z","steps":["trace[151872554] 'range keys from in-memory index tree'  (duration: 147.157635ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:43.459219Z","caller":"traceutil/trace.go:172","msg":"trace[482107203] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"130.963985ms","start":"2025-10-13T22:04:43.328235Z","end":"2025-10-13T22:04:43.459199Z","steps":["trace[482107203] 'process raft request'  (duration: 130.832202ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:05:32 up  1:48,  0 user,  load average: 5.87, 4.21, 5.86
	Linux default-k8s-diff-port-505851 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56588477c61cdaf31579516f71a44486912511726118d920501dc6964a03af29] <==
	I1013 22:04:37.641829       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:37.642298       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:04:37.642537       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:37.642561       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:37.642582       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:37.888589       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:37.889283       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:37.889312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:37.889466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:38.440712       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:38.440783       1 metrics.go:72] Registering metrics
	I1013 22:04:38.440873       1 controller.go:711] "Syncing nftables rules"
	I1013 22:04:47.889251       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:04:47.889316       1 main.go:301] handling current node
	I1013 22:04:57.891127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:04:57.891176       1 main.go:301] handling current node
	I1013 22:05:07.889365       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:07.889412       1 main.go:301] handling current node
	I1013 22:05:17.889055       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:17.889093       1 main.go:301] handling current node
	I1013 22:05:27.888973       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:27.889048       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4123f94280435b49d4a87e687509166fcba7b0fb561e6b74a0f94b565fb9fc7] <==
	I1013 22:04:36.368675       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:36.368680       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:36.368686       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:36.368507       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:04:36.368925       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:04:36.369029       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:04:36.369071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1013 22:04:36.375236       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:04:36.375898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:04:36.398243       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:04:36.408585       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:36.408611       1 policy_source.go:240] refreshing policies
	I1013 22:04:36.424233       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:36.631873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:36.667303       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:36.687384       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:36.696262       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:36.703583       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:36.750144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.96"}
	I1013 22:04:36.762317       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.209.166"}
	I1013 22:04:37.271819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:04:40.170304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:40.219677       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:40.219686       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:40.273136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [adda782c2ba2a3f6139979f78f26db41eb8daa3211f0cadcb2a7c82193618fea] <==
	I1013 22:04:39.686373       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:04:39.697678       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:04:39.703059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:04:39.703080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:04:39.703090       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:04:39.715623       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:04:39.715648       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:04:39.715834       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:04:39.715871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:04:39.715909       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:04:39.716033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:04:39.716144       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:04:39.716214       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:04:39.716405       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:04:39.716588       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:04:39.718037       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:04:39.720287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:39.723590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:04:39.725972       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:04:39.726055       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:04:39.726141       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-505851"
	I1013 22:04:39.726190       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:04:39.728398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:04:39.731690       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:04:39.746176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [73688ac34163745dfcaf8e03c5c6a54a4c91a87cb7741b6e20dcbece59db29e5] <==
	I1013 22:04:37.458545       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:37.514553       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:37.614908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:37.614945       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:04:37.615072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:37.634241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:37.634294       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:37.639482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:37.639974       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:37.640026       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:37.641555       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:37.641574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:37.641612       1 config.go:200] "Starting service config controller"
	I1013 22:04:37.641619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:37.641649       1 config.go:309] "Starting node config controller"
	I1013 22:04:37.641661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:37.641670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:37.641673       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:37.641693       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:37.742106       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:04:37.742118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:37.742124       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [90e2257cdef169aad8152d89754d028b3f47ff10734cdbe1fc2a91ee1d85145e] <==
	I1013 22:04:35.665355       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:04:36.322532       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:04:36.322670       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1013 22:04:36.322690       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:04:36.322701       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:04:36.343950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:36.343979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:36.347428       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:36.347478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:36.347657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:36.347972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:36.447974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:04:40 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:40.440782     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0db45673-8b9a-4762-9a55-139be862516b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k87hj\" (UID: \"0db45673-8b9a-4762-9a55-139be862516b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj"
	Oct 13 22:04:44 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:44.834168     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:04:45 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:45.093902     719 scope.go:117] "RemoveContainer" containerID="cf4ac7251df0d96daa1fe9582a548e71c97f763e8db76b6afece153a2be76ac4"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:46.098602     719 scope.go:117] "RemoveContainer" containerID="cf4ac7251df0d96daa1fe9582a548e71c97f763e8db76b6afece153a2be76ac4"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:46.098759     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:46.098940     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:04:47 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:47.103859     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:47 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:47.104049     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:04:49 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:49.124057     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc" podStartSLOduration=1.379476161 podStartE2EDuration="9.124031898s" podCreationTimestamp="2025-10-13 22:04:40 +0000 UTC" firstStartedPulling="2025-10-13 22:04:40.674506999 +0000 UTC m=+6.745719545" lastFinishedPulling="2025-10-13 22:04:48.419062737 +0000 UTC m=+14.490275282" observedRunningTime="2025-10-13 22:04:49.123842822 +0000 UTC m=+15.195055384" watchObservedRunningTime="2025-10-13 22:04:49.124031898 +0000 UTC m=+15.195244462"
	Oct 13 22:04:51 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:51.359062     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:51 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:51.359277     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.031906     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.160190     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.160552     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:04.160734     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:08 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:08.175427     719 scope.go:117] "RemoveContainer" containerID="648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59"
	Oct 13 22:05:11 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:11.358808     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:11 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:11.359147     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:23 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:23.029047     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:23 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:23.029254     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:05:29 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:29.657065     719 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: kubelet.service: Consumed 1.882s CPU time.
	
	
	==> kubernetes-dashboard [e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470] <==
	2025/10/13 22:04:48 Starting overwatch
	2025/10/13 22:04:48 Using namespace: kubernetes-dashboard
	2025/10/13 22:04:48 Using in-cluster config to connect to apiserver
	2025/10/13 22:04:48 Using secret token for csrf signing
	2025/10/13 22:04:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:04:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:04:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:04:48 Generating JWE encryption key
	2025/10/13 22:04:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:04:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:04:48 Initializing JWE encryption key from synchronized object
	2025/10/13 22:04:48 Creating in-cluster Sidecar client
	2025/10/13 22:04:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:04:48 Serving insecurely on HTTP port: 9090
	2025/10/13 22:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220] <==
	I1013 22:05:08.236783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:05:08.247275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:05:08.247351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:05:08.250778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:11.706338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:15.966563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:19.564771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:22.618248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.640811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.646523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:25.646698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:05:25.646780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbdf3e78-bf34-43b3-8edf-a59e96e32243", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706 became leader
	I1013 22:05:25.646867       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706!
	W1013 22:05:25.649207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.652551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:25.747472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706!
	W1013 22:05:27.655949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:27.660384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:29.664087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:29.670067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:31.673820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:31.679326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59] <==
	I1013 22:04:37.414738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:05:07.419415       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
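The tail of these logs tells the story: dashboard-metrics-scraper is in CrashLoopBackOff with the usual doubling back-off (10s, then 20s), and the first storage-provisioner instance dies because the in-cluster apiserver VIP (10.96.0.1:443) times out, consistent with the control plane having been paused. A minimal triage sketch for the crashing pod (pod, namespace, and context names are taken from the logs above, not verified against a live cluster):

	kubectl --context default-k8s-diff-port-505851 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-k87hj
	kubectl --context default-k8s-diff-port-505851 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-k87hj --previous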
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851: exit status 2 (446.122876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
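A quick way to reproduce the same proxy snapshot by hand (a sketch; the harness prints "<empty>" where these variables are unset):

	env | grep -iE '^(http|https|no)_proxy=' || echo 'no proxy variables set'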
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-505851
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-505851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	        "Created": "2025-10-13T22:03:21.32648793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:27.264926771Z",
	            "FinishedAt": "2025-10-13T22:04:26.306907027Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/hosts",
	        "LogPath": "/var/lib/docker/containers/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea/25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea-json.log",
	        "Name": "/default-k8s-diff-port-505851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-505851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-505851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25632f4a587b6c4ae3b8b01bcf7b68f7c63c7aa246539e4d409f57af494c93ea",
	                "LowerDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b2a262bb341241a8ef07d2e0e2f1e5a0bf23a58ce55acefa3a22c4f42e20d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-505851",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-505851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-505851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-505851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b392a435e3d34a1ae8ae6d1c0a26da3f0ee9cd91541afcb3c83dd2102371e080",
	            "SandboxKey": "/var/run/docker/netns/b392a435e3d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-505851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:2a:d0:7c:24:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd127c16ad9414037a41fda45a58cf82e4113c81cfa569a1b9f2b3db8c366a7a",
	                    "EndpointID": "ec7bdda28dc0a1b46e50c076a2795a88c9bec2e6757da79bb91b6a482bbfebf0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-505851",
	                        "25632f4a587b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
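The test harness extracts host ports from this JSON with docker's Go-template syntax (the same form appears in the cli_runner lines later in these logs); a one-line sketch against the 8444/tcp apiserver mapping captured above:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-505851

For the state recorded here this should print 33101.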
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851: exit status 2 (382.851336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
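Both status probes in this post-mortem template one field of minikube's status struct at a time; a combined sketch using only the two fields already exercised here ({{.Host}} and {{.APIServer}}) would be:

	out/minikube-linux-amd64 status \
	  --format='host={{.Host}} apiserver={{.APIServer}}' \
	  -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851

Both fields render "Running" for this snapshot, yet the command would presumably still exit 2, which is why the helper treats exit status 2 as "may be ok".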
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-505851 logs -n 25: (1.353471717s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo cat /var/lib/kubelet/config.yaml                                                                                                   │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status docker --all --full --no-pager                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat docker --no-pager                                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/docker/daemon.json                                                                                                        │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo docker system info                                                                                                                 │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl status cri-docker --all --full --no-pager                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat cri-docker --no-pager                                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                           │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                     │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cri-dockerd --version                                                                                                              │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status containerd --all --full --no-pager                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p auto-200102 sudo systemctl cat containerd --no-pager                                                                                                │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /lib/systemd/system/containerd.service                                                                                         │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo cat /etc/containerd/config.toml                                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo containerd config dump                                                                                                             │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status crio --all --full --no-pager                                                                                      │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl cat crio --no-pager                                                                                                      │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                            │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo crio config                                                                                                                        │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p auto-200102                                                                                                                                         │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p calico-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                 │ calico-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │                     │
	│ image   │ default-k8s-diff-port-505851 image list --format=json                                                                                                  │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p default-k8s-diff-port-505851 --alsologtostderr -v=1                                                                                                 │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 pgrep -a kubelet                                                                                                                     │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:57.600029  510068 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:57.600363  510068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:57.600376  510068 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:57.600383  510068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:57.600686  510068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:04:57.601373  510068 out.go:368] Setting JSON to false
	I1013 22:04:57.603077  510068 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6446,"bootTime":1760386652,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:04:57.603224  510068 start.go:141] virtualization: kvm guest
	I1013 22:04:57.605562  510068 out.go:179] * [calico-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:04:57.606924  510068 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:57.606962  510068 notify.go:220] Checking for updates...
	I1013 22:04:57.609488  510068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:57.611204  510068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:04:57.612626  510068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:04:57.614039  510068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:04:57.615575  510068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:57.617486  510068 config.go:182] Loaded profile config "default-k8s-diff-port-505851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617584  510068 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617657  510068 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:57.617767  510068 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:57.661729  510068 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:04:57.661828  510068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:57.735632  510068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:04:57.720460503 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:57.735770  510068 docker.go:318] overlay module found
	I1013 22:04:57.737869  510068 out.go:179] * Using the docker driver based on user configuration
	I1013 22:04:57.740052  510068 start.go:305] selected driver: docker
	I1013 22:04:57.740074  510068 start.go:925] validating driver "docker" against <nil>
	I1013 22:04:57.740090  510068 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:57.740774  510068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:57.851378  510068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-13 22:04:57.831976045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:04:57.851597  510068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:04:57.851888  510068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:57.853765  510068 out.go:179] * Using Docker driver with root privileges
	I1013 22:04:57.855220  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:04:57.855243  510068 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1013 22:04:57.855333  510068 start.go:349] cluster config:
	{Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:57.857287  510068 out.go:179] * Starting "calico-200102" primary control-plane node in "calico-200102" cluster
	I1013 22:04:57.858749  510068 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:57.860390  510068 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:04:57.574938  501664 out.go:252]   - Booting up control plane ...
	I1013 22:04:57.575078  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:04:57.575169  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:04:57.575758  501664 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:04:57.593841  501664 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:04:57.593966  501664 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:04:57.601352  501664 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:04:57.601610  501664 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:04:57.601769  501664 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:04:57.727000  501664 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:04:57.727199  501664 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:04:57.861788  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:57.861835  510068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:04:57.861853  510068 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:57.861910  510068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:04:57.861985  510068 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:04:57.862009  510068 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:04:57.862127  510068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json ...
	I1013 22:04:57.862151  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json: {Name:mkdc0a5acfca7b93aaa4869933063bc1ca23a4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:57.886320  510068 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:04:57.886366  510068 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:04:57.886389  510068 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:57.886417  510068 start.go:360] acquireMachinesLock for calico-200102: {Name:mk9e164b65ac945058cc8fdff0a6b7c974929130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:57.886526  510068 start.go:364] duration metric: took 89.282µs to acquireMachinesLock for "calico-200102"
	I1013 22:04:57.886562  510068 start.go:93] Provisioning new machine with config: &{Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:04:57.886665  510068 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:04:56.138079  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:04:56.138105  505109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:04:56.138174  505109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:04:56.173583  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.175158  505109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:56.175245  505109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:04:56.175343  505109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:04:56.177076  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.205368  505109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:04:56.269491  505109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:04:56.285560  505109 node_ready.go:35] waiting up to 6m0s for node "embed-certs-521669" to be "Ready" ...
	I1013 22:04:56.298084  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:04:56.299573  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:04:56.299594  505109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:04:56.325199  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:04:56.325227  505109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:04:56.332701  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:04:56.350946  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:04:56.350987  505109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:04:56.374977  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:04:56.375047  505109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:04:56.395083  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:04:56.395113  505109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:04:56.413947  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:04:56.413978  505109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:04:56.434377  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:04:56.434404  505109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:04:56.461625  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:04:56.461670  505109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:04:56.485455  505109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:04:56.485486  505109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:04:56.505644  505109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:04:57.933854  505109 node_ready.go:49] node "embed-certs-521669" is "Ready"
	I1013 22:04:57.933895  505109 node_ready.go:38] duration metric: took 1.64829365s for node "embed-certs-521669" to be "Ready" ...
	I1013 22:04:57.933912  505109 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:04:57.933963  505109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:04:58.556759  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.258633174s)
	I1013 22:04:58.556772  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.2239942s)
	I1013 22:04:58.557101  505109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.051390315s)
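
All ten dashboard manifests are applied in a single kubectl invocation with repeated -f flags, which is why one command accounts for the full ~2s rather than ten separate round trips. A hedged Go sketch of assembling such an invocation (not minikube's ssh_runner; the sudo/KUBECONFIG prefix mirrors the logged command, and the truncated file list is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests builds one "sudo KUBECONFIG=... kubectl apply -f a -f b ..."
// command line, matching the shape of the logged invocation.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // remaining manifests elided
	}
	err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", files)
	if err != nil {
		fmt.Println(err)
	}
}
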
	I1013 22:04:58.557115  505109 api_server.go:72] duration metric: took 2.4551482s to wait for apiserver process to appear ...
	I1013 22:04:58.557132  505109 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:04:58.557152  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:58.561933  505109 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-521669 addons enable metrics-server
	
	I1013 22:04:58.565714  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:58.565742  505109 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:58.573259  505109 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1013 22:04:57.965374  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:00.457661  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:04:57.893491  510068 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:04:57.893863  510068 start.go:159] libmachine.API.Create for "calico-200102" (driver="docker")
	I1013 22:04:57.893898  510068 client.go:168] LocalClient.Create starting
	I1013 22:04:57.894004  510068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:04:57.894047  510068 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:57.894069  510068 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:57.894139  510068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:04:57.894163  510068 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:57.894174  510068 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:57.894618  510068 cli_runner.go:164] Run: docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:04:57.932742  510068 cli_runner.go:211] docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:04:57.932943  510068 network_create.go:284] running [docker network inspect calico-200102] to gather additional debugging logs...
	I1013 22:04:57.933011  510068 cli_runner.go:164] Run: docker network inspect calico-200102
	W1013 22:04:57.966457  510068 cli_runner.go:211] docker network inspect calico-200102 returned with exit code 1
	I1013 22:04:57.966488  510068 network_create.go:287] error running [docker network inspect calico-200102]: docker network inspect calico-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-200102 not found
	I1013 22:04:57.966612  510068 network_create.go:289] output of [docker network inspect calico-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-200102 not found
	
	** /stderr **
	I1013 22:04:57.966792  510068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:04:57.993632  510068 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:04:57.994725  510068 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:04:57.995703  510068 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:04:57.996510  510068 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd127c16ad94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:91:d2:e9:26:c1} reservation:<nil>}
	I1013 22:04:57.997762  510068 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86150}
	I1013 22:04:57.997836  510068 network_create.go:124] attempt to create docker network calico-200102 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:04:57.997934  510068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-200102 calico-200102
	I1013 22:04:58.086657  510068 network_create.go:108] docker network calico-200102 192.168.85.0/24 created
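
The four "skipping subnet" lines above walk candidate /24s in steps of 9 on the third octet (192.168.49.0 → 58 → 67 → 76) until 192.168.85.0/24 is free. A minimal Go sketch of that scan, reduced to checking local interface addresses (minikube's network.go also consults reservations and docker-reported subnets):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already has an address inside
// cidr, which is (roughly) what makes a subnet "taken" in the log above.
func taken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Same stride the log shows: start at 192.168.49.0/24, then +9 per step.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free private subnet found")
}
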
	I1013 22:04:58.086806  510068 kic.go:121] calculated static IP "192.168.85.2" for the "calico-200102" container
	I1013 22:04:58.086964  510068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:04:58.111796  510068 cli_runner.go:164] Run: docker volume create calico-200102 --label name.minikube.sigs.k8s.io=calico-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:04:58.138960  510068 oci.go:103] Successfully created a docker volume calico-200102
	I1013 22:04:58.139068  510068 cli_runner.go:164] Run: docker run --rm --name calico-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-200102 --entrypoint /usr/bin/test -v calico-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:04:58.653539  510068 oci.go:107] Successfully prepared a docker volume calico-200102
	I1013 22:04:58.653589  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:04:58.653613  510068 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:04:58.653677  510068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:04:58.229500  501664 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.536221ms
	I1013 22:04:58.235372  501664 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:04:58.235498  501664 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1013 22:04:58.235614  501664 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:04:58.235712  501664 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:04:59.754874  501664 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.519620412s
	I1013 22:05:00.927012  501664 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.691843323s
	I1013 22:04:58.574864  505109 addons.go:514] duration metric: took 2.47282637s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 22:04:59.058167  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:59.064872  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:04:59.064902  505109 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:04:59.558137  505109 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1013 22:04:59.563643  505109 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1013 22:04:59.564836  505109 api_server.go:141] control plane version: v1.34.1
	I1013 22:04:59.564861  505109 api_server.go:131] duration metric: took 1.00772189s to wait for apiserver health ...
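
The healthz wait above tolerates 500 responses while the rbac/bootstrap-roles and scheduling post-start hooks finish, then accepts the first 200. A minimal Go sketch of such a poll loop (the URL, interval, and InsecureSkipVerify shortcut are assumptions for illustration; the real check trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 "ok", treating the 500s seen
// while apiserver post-start hooks complete as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut; do not skip verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
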
	I1013 22:04:59.564872  505109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:04:59.568145  505109 system_pods.go:59] 8 kube-system pods found
	I1013 22:04:59.568188  505109 system_pods.go:61] "coredns-66bc5c9577-kzq9t" [de4a6bd9-ffde-4056-a47b-41dd5db09e0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:59.568199  505109 system_pods.go:61] "etcd-embed-certs-521669" [cef194ff-ec06-48fc-8b99-a25838ea9dd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:59.568206  505109 system_pods.go:61] "kindnet-rqr6b" [83ca9459-7636-4391-814b-274ff7e06bc7] Running
	I1013 22:04:59.568215  505109 system_pods.go:61] "kube-apiserver-embed-certs-521669" [80c8fec4-c979-4c91-a725-ee41f5f0aab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:59.568229  505109 system_pods.go:61] "kube-controller-manager-embed-certs-521669" [326549fc-7a4b-4837-959d-eaa1c069b89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:59.568234  505109 system_pods.go:61] "kube-proxy-jjzrs" [511ca726-6516-4c5b-8bb4-f76d6e83ef94] Running
	I1013 22:04:59.568243  505109 system_pods.go:61] "kube-scheduler-embed-certs-521669" [d91d80ae-c8fe-4eaf-b383-05b7202992d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:59.568253  505109 system_pods.go:61] "storage-provisioner" [9c70ca0c-a52a-43a0-8221-0c1ecd43c72a] Running
	I1013 22:04:59.568261  505109 system_pods.go:74] duration metric: took 3.38231ms to wait for pod list to return data ...
	I1013 22:04:59.568270  505109 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:04:59.572390  505109 default_sa.go:45] found service account: "default"
	I1013 22:04:59.572426  505109 default_sa.go:55] duration metric: took 4.148403ms for default service account to be created ...
	I1013 22:04:59.572439  505109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:04:59.575494  505109 system_pods.go:86] 8 kube-system pods found
	I1013 22:04:59.575530  505109 system_pods.go:89] "coredns-66bc5c9577-kzq9t" [de4a6bd9-ffde-4056-a47b-41dd5db09e0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:04:59.575540  505109 system_pods.go:89] "etcd-embed-certs-521669" [cef194ff-ec06-48fc-8b99-a25838ea9dd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:04:59.575547  505109 system_pods.go:89] "kindnet-rqr6b" [83ca9459-7636-4391-814b-274ff7e06bc7] Running
	I1013 22:04:59.575556  505109 system_pods.go:89] "kube-apiserver-embed-certs-521669" [80c8fec4-c979-4c91-a725-ee41f5f0aab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:04:59.575564  505109 system_pods.go:89] "kube-controller-manager-embed-certs-521669" [326549fc-7a4b-4837-959d-eaa1c069b89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:04:59.575573  505109 system_pods.go:89] "kube-proxy-jjzrs" [511ca726-6516-4c5b-8bb4-f76d6e83ef94] Running
	I1013 22:04:59.575581  505109 system_pods.go:89] "kube-scheduler-embed-certs-521669" [d91d80ae-c8fe-4eaf-b383-05b7202992d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:04:59.575593  505109 system_pods.go:89] "storage-provisioner" [9c70ca0c-a52a-43a0-8221-0c1ecd43c72a] Running
	I1013 22:04:59.575602  505109 system_pods.go:126] duration metric: took 3.157141ms to wait for k8s-apps to be running ...
	I1013 22:04:59.575614  505109 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:04:59.575666  505109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:04:59.592023  505109 system_svc.go:56] duration metric: took 16.397591ms WaitForService to wait for kubelet
	I1013 22:04:59.592060  505109 kubeadm.go:586] duration metric: took 3.49009411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:59.592084  505109 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:04:59.595686  505109 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:04:59.595717  505109 node_conditions.go:123] node cpu capacity is 8
	I1013 22:04:59.595736  505109 node_conditions.go:105] duration metric: took 3.643199ms to run NodePressure ...
	I1013 22:04:59.595750  505109 start.go:241] waiting for startup goroutines ...
	I1013 22:04:59.595759  505109 start.go:246] waiting for cluster config update ...
	I1013 22:04:59.595775  505109 start.go:255] writing updated cluster config ...
	I1013 22:04:59.596088  505109 ssh_runner.go:195] Run: rm -f paused
	I1013 22:04:59.601136  505109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:04:59.605173  505109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kzq9t" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:05:01.620658  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:04.737279  501664 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501973484s
	I1013 22:05:04.751425  501664 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:05:04.766118  501664 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:05:04.783549  501664 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:05:04.783841  501664 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-200102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:05:04.794476  501664 kubeadm.go:318] [bootstrap-token] Using token: oz3cya.f6fitoruhpb1tvw4
	I1013 22:05:04.796491  501664 out.go:252]   - Configuring RBAC rules ...
	I1013 22:05:04.796630  501664 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:05:04.800695  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:05:04.808046  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:05:04.811159  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:05:04.814269  501664 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:05:04.818421  501664 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:05:05.144916  501664 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:05:05.721680  501664 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:05:06.145738  501664 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:05:06.146653  501664 kubeadm.go:318] 
	I1013 22:05:06.146773  501664 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:05:06.146787  501664 kubeadm.go:318] 
	I1013 22:05:06.146894  501664 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:05:06.146917  501664 kubeadm.go:318] 
	I1013 22:05:06.146957  501664 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:05:06.147128  501664 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:05:06.147218  501664 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:05:06.147229  501664 kubeadm.go:318] 
	I1013 22:05:06.147331  501664 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:05:06.147347  501664 kubeadm.go:318] 
	I1013 22:05:06.147405  501664 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:05:06.147540  501664 kubeadm.go:318] 
	I1013 22:05:06.147626  501664 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:05:06.147753  501664 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:05:06.147870  501664 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:05:06.147888  501664 kubeadm.go:318] 
	I1013 22:05:06.148036  501664 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:05:06.148107  501664 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:05:06.148113  501664 kubeadm.go:318] 
	I1013 22:05:06.148186  501664 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oz3cya.f6fitoruhpb1tvw4 \
	I1013 22:05:06.148297  501664 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:05:06.148336  501664 kubeadm.go:318] 	--control-plane 
	I1013 22:05:06.148345  501664 kubeadm.go:318] 
	I1013 22:05:06.148479  501664 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:05:06.148496  501664 kubeadm.go:318] 
	I1013 22:05:06.148598  501664 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oz3cya.f6fitoruhpb1tvw4 \
	I1013 22:05:06.148755  501664 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
	I1013 22:05:06.151600  501664 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:05:06.151768  501664 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:05:06.151799  501664 cni.go:84] Creating CNI manager for "kindnet"
	I1013 22:05:06.153343  501664 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:05:02.958248  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:05.460117  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:03.680421  510068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (5.026701488s)
	I1013 22:05:03.680463  510068 kic.go:203] duration metric: took 5.026844342s to extract preloaded images to volume ...
	W1013 22:05:03.680576  510068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:05:03.680611  510068 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:05:03.680664  510068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:05:03.769831  510068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-200102 --name calico-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-200102 --network calico-200102 --ip 192.168.85.2 --volume calico-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:05:04.202138  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Running}}
	I1013 22:05:04.228646  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.257138  510068 cli_runner.go:164] Run: docker exec calico-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:05:04.315847  510068 oci.go:144] the created container "calico-200102" has a running status.
	I1013 22:05:04.316069  510068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa...
	I1013 22:05:04.559979  510068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:05:04.592619  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.616563  510068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:05:04.616586  510068 kic_runner.go:114] Args: [docker exec --privileged calico-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:05:04.669410  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:04.691271  510068 machine.go:93] provisionDockerMachine start ...
	I1013 22:05:04.691468  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:04.714849  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:04.715219  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:04.715240  510068 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:05:04.715987  510068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59950->127.0.0.1:33113: read: connection reset by peer
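
The handshake failure above is expected while the freshly started container's sshd comes up; the provisioner simply retries until the successful command at 22:05:07. A sketch of that retry using golang.org/x/crypto/ssh (port and key path taken from the log; the attempt count and backoff are assumptions):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing while sshd starts; a "connection reset by
// peer" during the handshake is treated as retryable.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test machine
		Timeout:         10 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33113", "docker",
		"/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa", 30)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
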
	I1013 22:05:06.154402  501664 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:05:06.159850  501664 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:05:06.159868  501664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:05:06.175311  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:05:06.429809  501664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:05:06.429943  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:06.429979  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-200102 minikube.k8s.io/updated_at=2025_10_13T22_05_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=kindnet-200102 minikube.k8s.io/primary=true
	I1013 22:05:06.443204  501664 ops.go:34] apiserver oom_adj: -16
	I1013 22:05:06.532903  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:07.033717  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:07.533552  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1013 22:05:03.648769  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:06.111651  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:08.113860  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:08.033818  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:08.533976  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:09.033818  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:09.533800  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:10.033708  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:10.533605  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:11.033122  501664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:11.117150  501664 kubeadm.go:1113] duration metric: took 4.687327675s to wait for elevateKubeSystemPrivileges
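
elevateKubeSystemPrivileges spins on "kubectl get sa default" (the ~500ms-spaced runs above) until kube-controller-manager has created the default ServiceAccount. A minimal sketch of that wait loop (binary and kubeconfig paths copied from the log; the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA re-runs "kubectl get sa default" until it exits 0, i.e. until
// the default ServiceAccount exists.
func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	start := time.Now()
	err := waitDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}
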
	I1013 22:05:11.117196  501664 kubeadm.go:402] duration metric: took 19.137678587s to StartCluster
	I1013 22:05:11.117220  501664 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:11.117303  501664 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:11.119575  501664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:11.119920  501664 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:11.120076  501664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:05:11.120248  501664 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:05:11.120356  501664 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:11.120361  501664 addons.go:69] Setting storage-provisioner=true in profile "kindnet-200102"
	I1013 22:05:11.120380  501664 addons.go:238] Setting addon storage-provisioner=true in "kindnet-200102"
	I1013 22:05:11.120406  501664 addons.go:69] Setting default-storageclass=true in profile "kindnet-200102"
	I1013 22:05:11.120413  501664 host.go:66] Checking if "kindnet-200102" exists ...
	I1013 22:05:11.120426  501664 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-200102"
	I1013 22:05:11.120816  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.120984  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.121297  501664 out.go:179] * Verifying Kubernetes components...
	I1013 22:05:11.123542  501664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:11.153698  501664 addons.go:238] Setting addon default-storageclass=true in "kindnet-200102"
	I1013 22:05:11.153749  501664 host.go:66] Checking if "kindnet-200102" exists ...
	I1013 22:05:11.154230  501664 cli_runner.go:164] Run: docker container inspect kindnet-200102 --format={{.State.Status}}
	I1013 22:05:11.160303  501664 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:05:11.178414  501664 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:11.178447  501664 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:05:11.178515  501664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-200102
	I1013 22:05:11.183749  501664 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:11.183777  501664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:05:11.183842  501664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-200102
	I1013 22:05:11.211405  501664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/kindnet-200102/id_rsa Username:docker}
	I1013 22:05:11.216460  501664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/kindnet-200102/id_rsa Username:docker}
	I1013 22:05:11.289434  501664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:05:11.299130  501664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:11.330360  501664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:11.342206  501664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:11.551930  501664 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1013 22:05:11.553616  501664 node_ready.go:35] waiting up to 15m0s for node "kindnet-200102" to be "Ready" ...
	I1013 22:05:11.811570  501664 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1013 22:05:07.959098  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	W1013 22:05:10.458521  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:07.873540  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-200102
	
	I1013 22:05:07.873574  510068 ubuntu.go:182] provisioning hostname "calico-200102"
	I1013 22:05:07.873669  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:07.897536  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:07.897835  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:07.897857  510068 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-200102 && echo "calico-200102" | sudo tee /etc/hostname
	I1013 22:05:08.066880  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-200102
	
	I1013 22:05:08.067002  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:08.090660  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:08.091020  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:08.091052  510068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:05:08.247648  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:05:08.247751  510068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:05:08.247803  510068 ubuntu.go:190] setting up certificates
	I1013 22:05:08.247816  510068 provision.go:84] configureAuth start
	I1013 22:05:08.247882  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:08.268038  510068 provision.go:143] copyHostCerts
	I1013 22:05:08.268142  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:05:08.268156  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:05:08.268249  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:05:08.268390  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:05:08.268404  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:05:08.268451  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:05:08.268547  510068 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:05:08.268561  510068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:05:08.268601  510068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:05:08.268763  510068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.calico-200102 san=[127.0.0.1 192.168.85.2 calico-200102 localhost minikube]
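
Server-cert generation takes about a second here (22:05:08.26 → 22:05:09.22 before copyRemoteCerts). A compact sketch of minting a server certificate with exactly the logged SAN set, signed by an existing CA; it assumes a PKCS#1 RSA CA key pair in the current directory and panics on error for brevity (not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must is sketch-only error handling.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	certBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))
	caCert := must(x509.ParseCertificate(certBlock.Bytes))

	key := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-200102"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged for calico-200102.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"calico-200102", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
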
	I1013 22:05:09.222116  510068 provision.go:177] copyRemoteCerts
	I1013 22:05:09.222210  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:05:09.222264  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.245308  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:09.358834  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:05:09.386380  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:05:09.410631  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:05:09.435966  510068 provision.go:87] duration metric: took 1.188131204s to configureAuth
	I1013 22:05:09.436012  510068 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:05:09.436221  510068 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:09.436360  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.462663  510068 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:09.462968  510068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1013 22:05:09.463026  510068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:05:09.750125  510068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:05:09.750153  510068 machine.go:96] duration metric: took 5.058859508s to provisionDockerMachine
	I1013 22:05:09.750167  510068 client.go:171] duration metric: took 11.856262633s to LocalClient.Create
	I1013 22:05:09.750191  510068 start.go:167] duration metric: took 11.856329479s to libmachine.API.Create "calico-200102"
	I1013 22:05:09.750204  510068 start.go:293] postStartSetup for "calico-200102" (driver="docker")
	I1013 22:05:09.750218  510068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:05:09.750291  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:05:09.750357  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.770290  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:09.882791  510068 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:05:09.887101  510068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:05:09.887130  510068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:05:09.887144  510068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:05:09.887199  510068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:05:09.887291  510068 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:05:09.887409  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:05:09.895918  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:09.923938  510068 start.go:296] duration metric: took 173.715275ms for postStartSetup
	I1013 22:05:09.924415  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:09.943419  510068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/config.json ...
	I1013 22:05:09.943827  510068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:05:09.943885  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:09.964219  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.063280  510068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:05:10.068549  510068 start.go:128] duration metric: took 12.181865397s to createHost
	I1013 22:05:10.068576  510068 start.go:83] releasing machines lock for "calico-200102", held for 12.182034083s
	I1013 22:05:10.068644  510068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-200102
	I1013 22:05:10.087861  510068 ssh_runner.go:195] Run: cat /version.json
	I1013 22:05:10.087890  510068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:05:10.087924  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:10.087979  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:10.110561  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.110930  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:10.283865  510068 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:10.291080  510068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:05:10.330093  510068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:05:10.335465  510068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:05:10.335549  510068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:05:10.364854  510068 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:05:10.364883  510068 start.go:495] detecting cgroup driver to use...
	I1013 22:05:10.364929  510068 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:05:10.365017  510068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:05:10.382335  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:05:10.395980  510068 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:05:10.396157  510068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:05:10.414935  510068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:05:10.437151  510068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:05:10.532226  510068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:05:10.636456  510068 docker.go:234] disabling docker service ...
	I1013 22:05:10.636527  510068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:05:10.658108  510068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:05:10.675059  510068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:05:10.778198  510068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:05:10.874890  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:05:10.888836  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:05:10.907088  510068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:05:10.907161  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.919727  510068 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:05:10.919817  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.932019  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.941567  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.951375  510068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:05:10.961899  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.972671  510068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.988921  510068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:10.998801  510068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:05:11.007356  510068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:05:11.016548  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:11.128481  510068 ssh_runner.go:195] Run: sudo systemctl restart crio
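The block above edits /etc/crio/crio.conf.d/02-crio.conf entirely with sed one-liners: swap the pause image, switch cgroup_manager to systemd, and append conmon_cgroup = "pod" after it. An in-memory Go sketch of the same rewrites; the sample input is invented, the target values are the ones from the log:

    // sketch: the config rewrites the sed one-liners above perform,
    // done on an in-memory copy of 02-crio.conf (illustrative only).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
    		"[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
    	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// sed cgroup_manager swap, plus the `/a conmon_cgroup = "pod"` append
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }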
	I1013 22:05:11.866877  510068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:05:11.866963  510068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:05:11.871447  510068 start.go:563] Will wait 60s for crictl version
	I1013 22:05:11.871509  510068 ssh_runner.go:195] Run: which crictl
	I1013 22:05:11.875519  510068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:05:11.902820  510068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
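Both waits above (60s for the CRI socket, then 60s for a working crictl) reduce to polling with a deadline. A standard-library sketch of the socket half, assuming the path from the log:

    // sketch: poll for the CRI socket with a deadline, as in the
    // "Will wait 60s for socket path" step above.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval is invented
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is up")
    }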
	I1013 22:05:11.902911  510068 ssh_runner.go:195] Run: crio --version
	I1013 22:05:11.933859  510068 ssh_runner.go:195] Run: crio --version
	I1013 22:05:11.967499  510068 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:05:11.968547  510068 cli_runner.go:164] Run: docker network inspect calico-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:11.986107  510068 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:05:11.990478  510068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
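The hosts update above is a filter-and-append: drop any line already ending in "<tab>host.minikube.internal", then append the fresh mapping, staging through a temp file before cp. A Go sketch of just the string transformation (the temp-file-and-cp step is omitted):

    // sketch of the /etc/hosts rewrite: remove any stale entry for the
    // name, then append the new ip<TAB>name mapping.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func setHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		// mirrors grep -v $'\t<name>$'
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n"
    	fmt.Print(setHostsEntry(hosts, "192.168.85.1", "host.minikube.internal"))
    }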
	I1013 22:05:12.001588  510068 kubeadm.go:883] updating cluster {Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:05:12.001693  510068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:12.001735  510068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:12.037178  510068 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:12.037208  510068 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:05:12.037264  510068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:12.066282  510068 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:12.066310  510068 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:05:12.066318  510068 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:05:12.066404  510068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1013 22:05:12.066509  510068 ssh_runner.go:195] Run: crio config
	I1013 22:05:12.116528  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:05:12.116564  510068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:05:12.116593  510068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-200102 NodeName:calico-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:05:12.116754  510068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:05:12.116830  510068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:05:12.126115  510068 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:05:12.126177  510068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:05:12.134784  510068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 22:05:12.149452  510068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:05:12.165858  510068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
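The kubeadm.yaml just rendered and copied above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A standard-library-only sketch that splits such a stream on the document separators and lists each document's kind; the inline stream is an abbreviated stand-in for the full config:

    // sketch: enumerate the kinds in a multi-document kubeadm config.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
    		"---\nkind: ClusterConfiguration\n" +
    		"---\nkind: KubeletConfiguration\n" +
    		"---\nkind: KubeProxyConfiguration\n"
    	for i, doc := range strings.Split(stream, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
    			}
    		}
    	}
    }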
	I1013 22:05:12.179349  510068 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:05:12.183640  510068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:12.194961  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:12.278391  510068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:12.301670  510068 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102 for IP: 192.168.85.2
	I1013 22:05:12.301701  510068 certs.go:195] generating shared ca certs ...
	I1013 22:05:12.301723  510068 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.301902  510068 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:05:12.301971  510068 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:05:12.302005  510068 certs.go:257] generating profile certs ...
	I1013 22:05:12.302088  510068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key
	I1013 22:05:12.302112  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt with IP's: []
	I1013 22:05:11.813105  501664 addons.go:514] duration metric: took 692.853301ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 22:05:12.057274  501664 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-200102" context rescaled to 1 replicas
	W1013 22:05:10.611671  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:13.111625  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:12.957981  496036 pod_ready.go:104] pod "coredns-66bc5c9577-5x8dn" is not "Ready", error: <nil>
	I1013 22:05:14.957378  496036 pod_ready.go:94] pod "coredns-66bc5c9577-5x8dn" is "Ready"
	I1013 22:05:14.957409  496036 pod_ready.go:86] duration metric: took 37.006265036s for pod "coredns-66bc5c9577-5x8dn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.960048  496036 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.963954  496036 pod_ready.go:94] pod "etcd-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:14.964018  496036 pod_ready.go:86] duration metric: took 3.944974ms for pod "etcd-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.965934  496036 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.969538  496036 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:14.969561  496036 pod_ready.go:86] duration metric: took 3.602866ms for pod "kube-apiserver-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:14.971555  496036 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.155773  496036 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:15.155807  496036 pod_ready.go:86] duration metric: took 184.228441ms for pod "kube-controller-manager-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.356368  496036 pod_ready.go:83] waiting for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.755300  496036 pod_ready.go:94] pod "kube-proxy-27pnt" is "Ready"
	I1013 22:05:15.755335  496036 pod_ready.go:86] duration metric: took 398.933791ms for pod "kube-proxy-27pnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:15.955632  496036 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:16.355575  496036 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-505851" is "Ready"
	I1013 22:05:16.355608  496036 pod_ready.go:86] duration metric: took 399.945662ms for pod "kube-scheduler-default-k8s-diff-port-505851" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:16.355624  496036 pod_ready.go:40] duration metric: took 38.410140408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:16.409234  496036 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:16.411422  496036 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-505851" cluster and "default" namespace by default
	I1013 22:05:12.767762  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt ...
	I1013 22:05:12.767793  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.crt: {Name:mkb46b714d46426c42aba7afd5b837077b9a2d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.768025  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key ...
	I1013 22:05:12.768044  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/client.key: {Name:mk9ef563cfcf73cd77c8f23e63c10a2813d8195a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.768160  510068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3
	I1013 22:05:12.768180  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:05:12.893545  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 ...
	I1013 22:05:12.893581  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3: {Name:mk142a6cc57775ce69692e883da3c4477b1dcf08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.893824  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3 ...
	I1013 22:05:12.893857  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3: {Name:mk7a017cff5c03088d8aaaebd3f515e3d2053adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:12.893986  510068 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt.26a633c3 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt
	I1013 22:05:12.894140  510068 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key.26a633c3 -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key
	I1013 22:05:12.894238  510068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key
	I1013 22:05:12.894261  510068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt with IP's: []
	I1013 22:05:12.999959  510068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt ...
	I1013 22:05:13.000004  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt: {Name:mkd1eea43b2f771966d0a5900a3731d27f60cf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:13.000201  510068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key ...
	I1013 22:05:13.000223  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key: {Name:mk65315728dd1a718f10fbcd810639e1b27f39e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:13.000442  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:05:13.000496  510068 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:05:13.000513  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:05:13.000550  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:05:13.000585  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:05:13.000618  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:05:13.000688  510068 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:13.001336  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:05:13.024464  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:05:13.043041  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:05:13.062435  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:05:13.080885  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:05:13.100649  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:05:13.121211  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:05:13.140903  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/calico-200102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:05:13.160444  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:05:13.181619  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:05:13.202766  510068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:05:13.223639  510068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:05:13.237785  510068 ssh_runner.go:195] Run: openssl version
	I1013 22:05:13.244659  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:05:13.254318  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.258462  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.258522  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:05:13.294822  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:05:13.304976  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:05:13.314219  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.318666  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.318726  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:13.355667  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:05:13.365323  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:05:13.374504  510068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.378540  510068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.378631  510068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:05:13.416573  510068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
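Each `test -L ... || ln -fs ...` above installs a symlink named after the OpenSSL subject hash (produced by the preceding `openssl x509 -hash -noout` run) only when it is not already in place. An idempotent-symlink sketch in Go; the target and link names are copied from the last run above:

    // sketch: create the hash-named CA symlink only if it is not
    // already a symlink, mirroring `test -L <link> || ln -fs <target> <link>`.
    package main

    import (
    	"fmt"
    	"os"
    )

    func ensureLink(target, link string) error {
    	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
    		return nil // already a symlink, as `test -L` checks
    	}
    	os.Remove(link) // mimic ln -f: replace whatever is there
    	return os.Symlink(target, link)
    }

    func main() {
    	if err := ensureLink("/etc/ssl/certs/230929.pem", "/etc/ssl/certs/51391683.0"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }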
	I1013 22:05:13.426828  510068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:05:13.430934  510068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:05:13.431012  510068 kubeadm.go:400] StartCluster: {Name:calico-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:05:13.431093  510068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:05:13.431136  510068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:05:13.461432  510068 cri.go:89] found id: ""
	I1013 22:05:13.461490  510068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:05:13.470725  510068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:05:13.479768  510068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:05:13.479821  510068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:05:13.488324  510068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:05:13.488345  510068 kubeadm.go:157] found existing configuration files:
	
	I1013 22:05:13.488398  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:05:13.496639  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:05:13.496694  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:05:13.506391  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:05:13.517088  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:05:13.517148  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:05:13.526789  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:05:13.535104  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:05:13.535170  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:05:13.543494  510068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:05:13.551901  510068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:05:13.551963  510068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:05:13.560880  510068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:05:13.624077  510068 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1013 22:05:13.685194  510068 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
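kubeadm init is driven as a plain subprocess with the versioned binaries directory prepended to PATH (the Start line above). A Go sketch of that invocation; the flags are the real kubeadm flags from the log, but the ignore list is abbreviated and the output wiring is illustrative:

    // sketch: run kubeadm init with a prepended binaries PATH, as the
    // `env PATH=... kubeadm init --config ...` Start line above does.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem")
    	// os/exec uses the last value for a duplicated key, so this
    	// effectively prepends the versioned binaries directory.
    	cmd.Env = append(os.Environ(),
    		"PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"))
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }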
	W1013 22:05:13.556617  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:15.557197  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:17.557726  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	W1013 22:05:15.111742  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:17.111871  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:20.061147  501664 node_ready.go:57] node "kindnet-200102" has "Ready":"False" status (will retry)
	I1013 22:05:22.569075  501664 node_ready.go:49] node "kindnet-200102" is "Ready"
	I1013 22:05:22.569118  501664 node_ready.go:38] duration metric: took 11.015467972s for node "kindnet-200102" to be "Ready" ...
	I1013 22:05:22.569137  501664 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:05:22.569206  501664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:05:22.587946  501664 api_server.go:72] duration metric: took 11.46797891s to wait for apiserver process to appear ...
	I1013 22:05:22.587978  501664 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:05:22.588042  501664 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1013 22:05:22.593031  501664 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1013 22:05:22.594109  501664 api_server.go:141] control plane version: v1.34.1
	I1013 22:05:22.594138  501664 api_server.go:131] duration metric: took 6.152777ms to wait for apiserver health ...
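The healthz wait above is an HTTPS GET against https://192.168.94.2:8443/healthz until it answers 200 with body "ok". A sketch of a single probe; TLS verification is skipped for illustration, since a bare probe like this runs before client certificates are wired up (minikube's actual client setup may differ):

    // sketch: one apiserver healthz probe, as in api_server.go above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // probe timeout is invented
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.94.2:8443/healthz")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.StatusCode)
    }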
	I1013 22:05:22.594148  501664 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:05:22.597814  501664 system_pods.go:59] 8 kube-system pods found
	I1013 22:05:22.597849  501664 system_pods.go:61] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.597857  501664 system_pods.go:61] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.597862  501664 system_pods.go:61] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.597865  501664 system_pods.go:61] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.597869  501664 system_pods.go:61] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.597873  501664 system_pods.go:61] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.597876  501664 system_pods.go:61] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.597880  501664 system_pods.go:61] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.597895  501664 system_pods.go:74] duration metric: took 3.738699ms to wait for pod list to return data ...
	I1013 22:05:22.597905  501664 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:05:22.600678  501664 default_sa.go:45] found service account: "default"
	I1013 22:05:22.600702  501664 default_sa.go:55] duration metric: took 2.789012ms for default service account to be created ...
	I1013 22:05:22.600714  501664 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:05:22.604289  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:22.604324  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.604338  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.604346  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.604352  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.604365  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.604370  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.604375  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.604383  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.604410  501664 retry.go:31] will retry after 212.861156ms: missing components: kube-dns
	I1013 22:05:22.822132  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:22.822186  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:22.822194  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:22.822202  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:22.822215  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:22.822222  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:22.822236  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:22.822241  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:22.822254  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:22.822273  501664 retry.go:31] will retry after 235.519358ms: missing components: kube-dns
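The retry.go lines above show the wait-loop pattern: list the kube-system pods, report what is missing, sleep a randomized interval, try again. A generic Go sketch of that loop; the interval bounds are invented for illustration and the condition is a stand-in for the pod check:

    // sketch: retry a condition with randomized sleeps until a deadline,
    // logging "will retry after ..." like the retry.go lines above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		wait := 200*time.Millisecond +
    			time.Duration(rand.Int63n(int64(300*time.Millisecond)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    }

    func main() {
    	attempts := 0
    	_ = retry(5*time.Second, func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	})
    }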
	I1013 22:05:23.515178  510068 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:05:23.515261  510068 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:05:23.515375  510068 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:05:23.515455  510068 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:05:23.515509  510068 kubeadm.go:318] OS: Linux
	I1013 22:05:23.515582  510068 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:05:23.515662  510068 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:05:23.515749  510068 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:05:23.515829  510068 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:05:23.515899  510068 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:05:23.515983  510068 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:05:23.516092  510068 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:05:23.516163  510068 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:05:23.516284  510068 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:05:23.516429  510068 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:05:23.516544  510068 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:05:23.516621  510068 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:05:23.518293  510068 out.go:252]   - Generating certificates and keys ...
	I1013 22:05:23.518368  510068 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:05:23.518425  510068 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:05:23.518510  510068 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:05:23.518608  510068 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:05:23.518704  510068 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:05:23.518783  510068 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:05:23.518870  510068 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:05:23.519083  510068 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:05:23.519175  510068 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:05:23.519315  510068 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-200102 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:05:23.519415  510068 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:05:23.519512  510068 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:05:23.519574  510068 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:05:23.519654  510068 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:05:23.519766  510068 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:05:23.519853  510068 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:05:23.519931  510068 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:05:23.520053  510068 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:05:23.520139  510068 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:05:23.520248  510068 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:05:23.520363  510068 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:05:23.521863  510068 out.go:252]   - Booting up control plane ...
	I1013 22:05:23.521939  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:05:23.522032  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:05:23.522093  510068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:05:23.522199  510068 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:05:23.522298  510068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:05:23.522404  510068 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:05:23.522491  510068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:05:23.522538  510068 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:05:23.522657  510068 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:05:23.522755  510068 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:05:23.522815  510068 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001913234s
	I1013 22:05:23.522911  510068 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:05:23.523011  510068 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:05:23.523106  510068 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:05:23.523176  510068 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:05:23.523287  510068 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 795.308239ms
	I1013 22:05:23.523404  510068 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.82867851s
	I1013 22:05:23.523512  510068 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501444312s
	I1013 22:05:23.523693  510068 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:05:23.523889  510068 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:05:23.524002  510068 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:05:23.524281  510068 kubeadm.go:318] [mark-control-plane] Marking the node calico-200102 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:05:23.524372  510068 kubeadm.go:318] [bootstrap-token] Using token: mye8a6.r1jyitw9zae9z8t9
	W1013 22:05:19.611754  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:22.111391  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:23.525789  510068 out.go:252]   - Configuring RBAC rules ...
	I1013 22:05:23.525968  510068 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:05:23.526131  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:05:23.526323  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:05:23.526500  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:05:23.526664  510068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:05:23.526794  510068 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:05:23.526960  510068 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:05:23.527074  510068 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:05:23.527151  510068 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:05:23.527167  510068 kubeadm.go:318] 
	I1013 22:05:23.527243  510068 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:05:23.527260  510068 kubeadm.go:318] 
	I1013 22:05:23.527350  510068 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:05:23.527366  510068 kubeadm.go:318] 
	I1013 22:05:23.527398  510068 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:05:23.527479  510068 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:05:23.527556  510068 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:05:23.527571  510068 kubeadm.go:318] 
	I1013 22:05:23.527644  510068 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:05:23.527653  510068 kubeadm.go:318] 
	I1013 22:05:23.527714  510068 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:05:23.527724  510068 kubeadm.go:318] 
	I1013 22:05:23.527788  510068 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:05:23.527884  510068 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:05:23.527959  510068 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:05:23.527972  510068 kubeadm.go:318] 
	I1013 22:05:23.528097  510068 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:05:23.528185  510068 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:05:23.528190  510068 kubeadm.go:318] 
	I1013 22:05:23.528280  510068 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mye8a6.r1jyitw9zae9z8t9 \
	I1013 22:05:23.528396  510068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 \
	I1013 22:05:23.528423  510068 kubeadm.go:318] 	--control-plane 
	I1013 22:05:23.528428  510068 kubeadm.go:318] 
	I1013 22:05:23.528525  510068 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:05:23.528530  510068 kubeadm.go:318] 
	I1013 22:05:23.528631  510068 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mye8a6.r1jyitw9zae9z8t9 \
	I1013 22:05:23.528767  510068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:00bf5d6d0f4ef7bd9334b23e90d8fd2b7e452995fe6a96d4fd7aebd6540a9956 
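The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the CA file referenced earlier in the log:

    // sketch: recompute kubeadm's discovery-token-ca-cert-hash from a CA PEM.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }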
	I1013 22:05:23.528778  510068 cni.go:84] Creating CNI manager for "calico"
	I1013 22:05:23.531221  510068 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1013 22:05:23.061843  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.061881  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:23.061889  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.061893  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.061897  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.061900  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.061905  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.061908  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.061913  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:23.061932  501664 retry.go:31] will retry after 429.009418ms: missing components: kube-dns
	I1013 22:05:23.494806  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.494859  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:23.494869  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.494875  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.494880  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.494885  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.494891  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.494896  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.494903  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:23.494924  501664 retry.go:31] will retry after 417.528613ms: missing components: kube-dns
	I1013 22:05:23.917336  501664 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:23.917367  501664 system_pods.go:89] "coredns-66bc5c9577-l4nxp" [32bf1a1a-47c6-4c43-9b93-29cb1395c517] Running
	I1013 22:05:23.917373  501664 system_pods.go:89] "etcd-kindnet-200102" [d26b572f-f6ba-4966-b43d-ad6dae2e5ab1] Running
	I1013 22:05:23.917377  501664 system_pods.go:89] "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
	I1013 22:05:23.917381  501664 system_pods.go:89] "kube-apiserver-kindnet-200102" [9a5167e4-dac0-47a8-88dd-140f99bcc10c] Running
	I1013 22:05:23.917384  501664 system_pods.go:89] "kube-controller-manager-kindnet-200102" [838e6428-b6fd-428a-b2d5-2df1b586f0db] Running
	I1013 22:05:23.917388  501664 system_pods.go:89] "kube-proxy-ppbkr" [8e23e154-3fa0-4154-8630-68c1de100a77] Running
	I1013 22:05:23.917391  501664 system_pods.go:89] "kube-scheduler-kindnet-200102" [ed71ba24-be9c-48ec-94b1-136b125d8b36] Running
	I1013 22:05:23.917395  501664 system_pods.go:89] "storage-provisioner" [2767fd4d-5c53-4d5a-9a82-81cbf7cfefb3] Running
	I1013 22:05:23.917404  501664 system_pods.go:126] duration metric: took 1.316682025s to wait for k8s-apps to be running ...
	I1013 22:05:23.917414  501664 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:05:23.917466  501664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:23.935495  501664 system_svc.go:56] duration metric: took 18.065606ms WaitForService to wait for kubelet
	I1013 22:05:23.935546  501664 kubeadm.go:586] duration metric: took 12.815588536s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:23.935574  501664 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:05:23.939341  501664 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:05:23.939377  501664 node_conditions.go:123] node cpu capacity is 8
	I1013 22:05:23.939393  501664 node_conditions.go:105] duration metric: took 3.812781ms to run NodePressure ...
	I1013 22:05:23.939409  501664 start.go:241] waiting for startup goroutines ...
	I1013 22:05:23.939422  501664 start.go:246] waiting for cluster config update ...
	I1013 22:05:23.939436  501664 start.go:255] writing updated cluster config ...
	I1013 22:05:23.939771  501664 ssh_runner.go:195] Run: rm -f paused
	I1013 22:05:23.945463  501664 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:23.950180  501664 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l4nxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.955649  501664 pod_ready.go:94] pod "coredns-66bc5c9577-l4nxp" is "Ready"
	I1013 22:05:23.955682  501664 pod_ready.go:86] duration metric: took 5.476054ms for pod "coredns-66bc5c9577-l4nxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.958233  501664 pod_ready.go:83] waiting for pod "etcd-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.963222  501664 pod_ready.go:94] pod "etcd-kindnet-200102" is "Ready"
	I1013 22:05:23.963255  501664 pod_ready.go:86] duration metric: took 4.994304ms for pod "etcd-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.965583  501664 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.970232  501664 pod_ready.go:94] pod "kube-apiserver-kindnet-200102" is "Ready"
	I1013 22:05:23.970260  501664 pod_ready.go:86] duration metric: took 4.653962ms for pod "kube-apiserver-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:23.972668  501664 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.351055  501664 pod_ready.go:94] pod "kube-controller-manager-kindnet-200102" is "Ready"
	I1013 22:05:24.351089  501664 pod_ready.go:86] duration metric: took 378.386921ms for pod "kube-controller-manager-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.551099  501664 pod_ready.go:83] waiting for pod "kube-proxy-ppbkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:24.950531  501664 pod_ready.go:94] pod "kube-proxy-ppbkr" is "Ready"
	I1013 22:05:24.950562  501664 pod_ready.go:86] duration metric: took 399.435549ms for pod "kube-proxy-ppbkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.151177  501664 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.550933  501664 pod_ready.go:94] pod "kube-scheduler-kindnet-200102" is "Ready"
	I1013 22:05:25.550968  501664 pod_ready.go:86] duration metric: took 399.763244ms for pod "kube-scheduler-kindnet-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:25.551025  501664 pod_ready.go:40] duration metric: took 1.605516253s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:25.599669  501664 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:25.601833  501664 out.go:179] * Done! kubectl is now configured to use "kindnet-200102" cluster and "default" namespace by default
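
The pod_ready waits that precede the "Done!" line above gate on the standard PodReady condition in each pod's status. A minimal client-go sketch of the same check; the kubeconfig path is an illustrative assumption, while the pod name is taken from this run's log:

// podready.go - sketch of the "Ready" check behind the pod_ready waits above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-66bc5c9577-l4nxp", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
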
	I1013 22:05:23.533882  510068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:05:23.533909  510068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1013 22:05:23.552779  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:05:24.430184  510068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:05:24.430344  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:24.430455  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-200102 minikube.k8s.io/updated_at=2025_10_13T22_05_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=calico-200102 minikube.k8s.io/primary=true
	I1013 22:05:24.442633  510068 ops.go:34] apiserver oom_adj: -16
	I1013 22:05:24.505508  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:25.006121  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:25.506207  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:26.005725  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:26.506224  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:27.006060  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:27.505887  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1013 22:05:24.114346  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:26.611754  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	I1013 22:05:28.005547  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:28.506422  510068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:28.594025  510068 kubeadm.go:1113] duration metric: took 4.163747427s to wait for elevateKubeSystemPrivileges
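
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the default ServiceAccount must exist before the minikube-rbac ClusterRoleBinding can take effect. A hedged sketch of the same wait using client-go's polling helper; the kubeconfig path, interval, and timeout are assumptions, not minikube's actual values:

// sawait.go - sketch: poll until the "default" ServiceAccount exists.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500ms, give up after 2 minutes (both values assumed).
	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").
				Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil // not found yet -> keep polling
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("default ServiceAccount is present")
}
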
	I1013 22:05:28.594067  510068 kubeadm.go:402] duration metric: took 15.163058578s to StartCluster
	I1013 22:05:28.594091  510068 settings.go:142] acquiring lock: {Name:mk13008e3b2fce0e368bddbf00d43b8340210d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:28.594198  510068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:28.596275  510068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/kubeconfig: {Name:mk2f336b13d09ff6e6da9e86905651541ce51ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:28.596506  510068 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:28.596527  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:05:28.596595  510068 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:05:28.596690  510068 addons.go:69] Setting storage-provisioner=true in profile "calico-200102"
	I1013 22:05:28.596718  510068 addons.go:238] Setting addon storage-provisioner=true in "calico-200102"
	I1013 22:05:28.596720  510068 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:28.596756  510068 host.go:66] Checking if "calico-200102" exists ...
	I1013 22:05:28.596776  510068 addons.go:69] Setting default-storageclass=true in profile "calico-200102"
	I1013 22:05:28.596800  510068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-200102"
	I1013 22:05:28.597274  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.597352  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.598213  510068 out.go:179] * Verifying Kubernetes components...
	I1013 22:05:28.599521  510068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:28.625842  510068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:05:28.627374  510068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:28.627396  510068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:05:28.627457  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:28.630419  510068 addons.go:238] Setting addon default-storageclass=true in "calico-200102"
	I1013 22:05:28.630464  510068 host.go:66] Checking if "calico-200102" exists ...
	I1013 22:05:28.630864  510068 cli_runner.go:164] Run: docker container inspect calico-200102 --format={{.State.Status}}
	I1013 22:05:28.653086  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:28.657165  510068 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:28.657305  510068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:05:28.657377  510068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-200102
	I1013 22:05:28.680285  510068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/calico-200102/id_rsa Username:docker}
	I1013 22:05:28.700254  510068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
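
The pipeline above fetches the coredns ConfigMap, splices a hosts{} block (mapping host.minikube.internal to the gateway IP 192.168.85.1) in front of the forward directive with sed, and replaces the ConfigMap. A client-go equivalent is sketched below; this is not minikube's actual implementation, and it assumes the Corefile layout implied by the sed expression:

// corednshosts.go - sketch: inject host.minikube.internal into the Corefile.
package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	hosts := "        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block immediately before the forward directive,
	// mirroring what the sed expression above does.
	corefile := cm.Data["Corefile"]
	idx := strings.Index(corefile, "        forward .")
	if idx < 0 {
		log.Fatal("forward directive not found in Corefile")
	}
	cm.Data["Corefile"] = corefile[:idx] + hosts + corefile[idx:]
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("injected host.minikube.internal into CoreDNS Corefile")
}
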
	I1013 22:05:28.768442  510068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:28.784087  510068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:28.799160  510068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:28.908126  510068 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 22:05:28.910092  510068 node_ready.go:35] waiting up to 15m0s for node "calico-200102" to be "Ready" ...
	I1013 22:05:29.140656  510068 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:05:29.141984  510068 addons.go:514] duration metric: took 545.384489ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:05:29.414742  510068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-200102" context rescaled to 1 replicas
	W1013 22:05:30.914682  510068 node_ready.go:57] node "calico-200102" has "Ready":"False" status (will retry)
	W1013 22:05:28.614514  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:31.112685  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	W1013 22:05:33.113216  505109 pod_ready.go:104] pod "coredns-66bc5c9577-kzq9t" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.464511356Z" level=info msg="Created container e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc/kubernetes-dashboard" id=c15e343e-21dc-4398-8a35-c8477ad76dd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.465537397Z" level=info msg="Starting container: e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470" id=bc337f74-9c3f-4589-bc02-7159cbb3ab88 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:04:48 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:04:48.46795386Z" level=info msg="Started container" PID=1721 containerID=e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc/kubernetes-dashboard id=bc337f74-9c3f-4589-bc02-7159cbb3ab88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02fe94f0f6776d94f67e73e143858b641d2270e4b46439a1f9d3e19c9ef4fb76
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.032565808Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a5e3ba48-6a9d-47e1-89ae-c9da8f844cb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.03417933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ac25a4ca-d15f-45eb-b5b3-e3577b1c35ef name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.038115606Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=cdbc32b9-5d3b-4e38-9268-48c93de44bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.039288657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.048363923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.052168803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.092863746Z" level=info msg="Created container ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=cdbc32b9-5d3b-4e38-9268-48c93de44bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.094722994Z" level=info msg="Starting container: ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7" id=470361e5-745e-4ebe-be2d-31001ac0bd81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.098821906Z" level=info msg="Started container" PID=1737 containerID=ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper id=470361e5-745e-4ebe-be2d-31001ac0bd81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6d149ff7fed67aa6cd26de59f4b11938e5c5377d70a88a5250d0026ed632337
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.163532913Z" level=info msg="Removing container: 4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8" id=11c739a8-1595-4fa6-9206-92ca7b590ccf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:04 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:04.181272739Z" level=info msg="Removed container 4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj/dashboard-metrics-scraper" id=11c739a8-1595-4fa6-9206-92ca7b590ccf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.176020758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=53664a35-4341-411a-a88a-e14834b94232 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.176981109Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2dc8d86-717e-4e99-91c2-0944da48aafb name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.178093492Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1c200446-263a-46d4-bcc7-85ca149affd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.178404367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184290224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184492274Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c4bc52c201c482b143c8db07a5e15f76758faf44781cff564cd1d01f76b4459e/merged/etc/passwd: no such file or directory"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.184531803Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c4bc52c201c482b143c8db07a5e15f76758faf44781cff564cd1d01f76b4459e/merged/etc/group: no such file or directory"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.185159006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.218657405Z" level=info msg="Created container 62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220: kube-system/storage-provisioner/storage-provisioner" id=1c200446-263a-46d4-bcc7-85ca149affd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.219413991Z" level=info msg="Starting container: 62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220" id=5b6f0bd7-3a9c-4725-828d-ef04f08b421f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:08 default-k8s-diff-port-505851 crio[562]: time="2025-10-13T22:05:08.221578526Z" level=info msg="Started container" PID=1751 containerID=62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220 description=kube-system/storage-provisioner/storage-provisioner id=5b6f0bd7-3a9c-4725-828d-ef04f08b421f name=/runtime.v1.RuntimeService/StartContainer sandboxID=349e6e12e6a2f647c2249f26142e1cda0e6da42211083b86476a891760e4bb9d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	62e1fa758a47e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   349e6e12e6a2f       storage-provisioner                                    kube-system
	ae38e1db97695       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   e6d149ff7fed6       dashboard-metrics-scraper-6ffb444bf9-k87hj             kubernetes-dashboard
	e976a19b88a83       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago       Running             kubernetes-dashboard        0                   02fe94f0f6776       kubernetes-dashboard-855c9754f9-2xpgc                  kubernetes-dashboard
	47a2f7003ce93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   945197391dfe9       coredns-66bc5c9577-5x8dn                               kube-system
	f51ad4d1b0bc3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   e1c80b27dcf65       busybox                                                default
	73688ac341637       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   7d26cb2a98598       kube-proxy-27pnt                                       kube-system
	56588477c61cd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   e29b793bb5f80       kindnet-m5whc                                          kube-system
	648d22473246e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   349e6e12e6a2f       storage-provisioner                                    kube-system
	adda782c2ba2a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   a42eec82a0ad5       kube-controller-manager-default-k8s-diff-port-505851   kube-system
	90e2257cdef16       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   d5f75859c9e3a       kube-scheduler-default-k8s-diff-port-505851            kube-system
	a4123f9428043       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   2ee5dd07f5855       kube-apiserver-default-k8s-diff-port-505851            kube-system
	4e42bb1ca9412       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   98c579e62ad92       etcd-default-k8s-diff-port-505851                      kube-system
	
	
	==> coredns [47a2f7003ce93cda6369bdcfca70a589ca8b8c7e50b0ec90f8b055885ba36ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40613 - 45690 "HINFO IN 6117315671624123169.9074301599654801202. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.121960443s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
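
The "dial tcp 10.96.0.1:443: i/o timeout" entries above mean CoreDNS could not reach the kubernetes Service VIP while its informers were starting, which is why the earlier "plugin/ready: Still waiting" lines appear. A small probe that reproduces the connectivity check from inside a pod; the 5-second timeout is an assumption:

// apiprobe.go - sketch: dial the kubernetes Service VIP the way the failing
// informer connections above do. Run from inside a pod on the cluster network.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	d := net.Dialer{}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := d.DialContext(ctx, "tcp", "10.96.0.1:443")
	if err != nil {
		fmt.Println("kubernetes Service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("kubernetes Service VIP reachable")
}
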
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-505851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-505851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-505851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-505851
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:05:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:05:27 +0000   Mon, 13 Oct 2025 22:03:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-505851
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                ff284ab0-6ab9-4288-9f40-64d181496243
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-5x8dn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-505851                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-m5whc                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-505851             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-505851    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-27pnt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-505851             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k87hj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2xpgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-505851 event: Registered Node default-k8s-diff-port-505851 in Controller
	  Normal  NodeReady                99s                  kubelet          Node default-k8s-diff-port-505851 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-505851 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node default-k8s-diff-port-505851 event: Registered Node default-k8s-diff-port-505851 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [4e42bb1ca9412735b924cae876a0503b479855539f2a50a515e9f235dd2a15ee] <==
	{"level":"warn","ts":"2025-10-13T22:04:35.785142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.793476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.800031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.807586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.814223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.832674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.839830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.847092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:04:35.901803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:04:41.874509Z","caller":"traceutil/trace.go:172","msg":"trace[61837538] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"118.415396ms","start":"2025-10-13T22:04:41.756072Z","end":"2025-10-13T22:04:41.874488Z","steps":["trace[61837538] 'process raft request'  (duration: 41.476629ms)","trace[61837538] 'compare'  (duration: 76.800513ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:42.057176Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.254584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-5x8dn\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-13T22:04:42.057267Z","caller":"traceutil/trace.go:172","msg":"trace[1749292618] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-5x8dn; range_end:; response_count:1; response_revision:548; }","duration":"103.370668ms","start":"2025-10-13T22:04:41.953885Z","end":"2025-10-13T22:04:42.057256Z","steps":["trace[1749292618] 'agreement among raft nodes before linearized reading'  (duration: 86.864245ms)","trace[1749292618] 'range keys from in-memory index tree'  (duration: 16.29555ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.057204Z","caller":"traceutil/trace.go:172","msg":"trace[1173754598] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"144.925267ms","start":"2025-10-13T22:04:41.912254Z","end":"2025-10-13T22:04:42.057179Z","steps":["trace[1173754598] 'process raft request'  (duration: 128.567428ms)","trace[1173754598] 'compare'  (duration: 16.225004ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.196570Z","caller":"traceutil/trace.go:172","msg":"trace[574587180] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"134.708251ms","start":"2025-10-13T22:04:42.061837Z","end":"2025-10-13T22:04:42.196545Z","steps":["trace[574587180] 'process raft request'  (duration: 125.849322ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:42.375114Z","caller":"traceutil/trace.go:172","msg":"trace[164140328] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"173.218299ms","start":"2025-10-13T22:04:42.201880Z","end":"2025-10-13T22:04:42.375098Z","steps":["trace[164140328] 'process raft request'  (duration: 173.022409ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:42.512082Z","caller":"traceutil/trace.go:172","msg":"trace[1934766990] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"132.552268ms","start":"2025-10-13T22:04:42.379511Z","end":"2025-10-13T22:04:42.512064Z","steps":["trace[1934766990] 'process raft request'  (duration: 98.435319ms)","trace[1934766990] 'compare'  (duration: 33.956985ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.711264Z","caller":"traceutil/trace.go:172","msg":"trace[1507671136] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"194.662417ms","start":"2025-10-13T22:04:42.516574Z","end":"2025-10-13T22:04:42.711236Z","steps":["trace[1507671136] 'process raft request'  (duration: 127.876323ms)","trace[1507671136] 'compare'  (duration: 66.631691ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:42.946590Z","caller":"traceutil/trace.go:172","msg":"trace[467698538] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"161.986081ms","start":"2025-10-13T22:04:42.784587Z","end":"2025-10-13T22:04:42.946574Z","steps":["trace[467698538] 'process raft request'  (duration: 129.686885ms)","trace[467698538] 'compare'  (duration: 32.173011ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:04:43.067772Z","caller":"traceutil/trace.go:172","msg":"trace[1360641468] linearizableReadLoop","detail":"{readStateIndex:585; appliedIndex:585; }","duration":"114.528233ms","start":"2025-10-13T22:04:42.953215Z","end":"2025-10-13T22:04:43.067743Z","steps":["trace[1360641468] 'read index received'  (duration: 114.515674ms)","trace[1360641468] 'applied index is now lower than readState.Index'  (duration: 11.359µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:43.078280Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.02167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-5x8dn\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-13T22:04:43.078354Z","caller":"traceutil/trace.go:172","msg":"trace[1023813438] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-5x8dn; range_end:; response_count:1; response_revision:557; }","duration":"125.130272ms","start":"2025-10-13T22:04:42.953203Z","end":"2025-10-13T22:04:43.078333Z","steps":["trace[1023813438] 'agreement among raft nodes before linearized reading'  (duration: 114.619246ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:43.078291Z","caller":"traceutil/trace.go:172","msg":"trace[1416416100] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"126.900256ms","start":"2025-10-13T22:04:42.951374Z","end":"2025-10-13T22:04:43.078275Z","steps":["trace[1416416100] 'process raft request'  (duration: 116.411456ms)","trace[1416416100] 'compare'  (duration: 10.380665ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:04:43.326043Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.324526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-505851.186e2c2c81dd20d9\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-10-13T22:04:43.326143Z","caller":"traceutil/trace.go:172","msg":"trace[151872554] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-505851.186e2c2c81dd20d9; range_end:; response_count:1; response_revision:562; }","duration":"147.40721ms","start":"2025-10-13T22:04:43.178686Z","end":"2025-10-13T22:04:43.326093Z","steps":["trace[151872554] 'range keys from in-memory index tree'  (duration: 147.157635ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:04:43.459219Z","caller":"traceutil/trace.go:172","msg":"trace[482107203] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"130.963985ms","start":"2025-10-13T22:04:43.328235Z","end":"2025-10-13T22:04:43.459199Z","steps":["trace[482107203] 'process raft request'  (duration: 130.832202ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:05:34 up  1:48,  0 user,  load average: 6.20, 4.31, 5.88
	Linux default-k8s-diff-port-505851 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56588477c61cdaf31579516f71a44486912511726118d920501dc6964a03af29] <==
	I1013 22:04:37.641829       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:37.642298       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:04:37.642537       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:37.642561       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:37.642582       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:37.888589       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:37.889283       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:37.889312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:37.889466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:38.440712       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:38.440783       1 metrics.go:72] Registering metrics
	I1013 22:04:38.440873       1 controller.go:711] "Syncing nftables rules"
	I1013 22:04:47.889251       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:04:47.889316       1 main.go:301] handling current node
	I1013 22:04:57.891127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:04:57.891176       1 main.go:301] handling current node
	I1013 22:05:07.889365       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:07.889412       1 main.go:301] handling current node
	I1013 22:05:17.889055       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:17.889093       1 main.go:301] handling current node
	I1013 22:05:27.888973       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:05:27.889048       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4123f94280435b49d4a87e687509166fcba7b0fb561e6b74a0f94b565fb9fc7] <==
	I1013 22:04:36.368675       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:36.368680       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:36.368686       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:36.368507       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:04:36.368925       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:04:36.369029       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:04:36.369071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1013 22:04:36.375236       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:04:36.375898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:04:36.398243       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:04:36.408585       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:36.408611       1 policy_source.go:240] refreshing policies
	I1013 22:04:36.424233       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:36.631873       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:36.667303       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:36.687384       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:36.696262       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:36.703583       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:36.750144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.96"}
	I1013 22:04:36.762317       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.209.166"}
	I1013 22:04:37.271819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:04:40.170304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:04:40.219677       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:40.219686       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:04:40.273136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [adda782c2ba2a3f6139979f78f26db41eb8daa3211f0cadcb2a7c82193618fea] <==
	I1013 22:04:39.686373       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:04:39.697678       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:04:39.703059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:04:39.703080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:04:39.703090       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:04:39.715623       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:04:39.715648       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:04:39.715834       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:04:39.715871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:04:39.715909       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:04:39.716033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:04:39.716144       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:04:39.716214       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:04:39.716405       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:04:39.716588       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:04:39.718037       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:04:39.720287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:04:39.723590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:04:39.725972       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:04:39.726055       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:04:39.726141       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-505851"
	I1013 22:04:39.726190       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:04:39.728398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:04:39.731690       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:04:39.746176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [73688ac34163745dfcaf8e03c5c6a54a4c91a87cb7741b6e20dcbece59db29e5] <==
	I1013 22:04:37.458545       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:37.514553       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:37.614908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:37.614945       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:04:37.615072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:37.634241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:37.634294       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:37.639482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:37.639974       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:37.640026       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:37.641555       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:37.641574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:37.641612       1 config.go:200] "Starting service config controller"
	I1013 22:04:37.641619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:37.641649       1 config.go:309] "Starting node config controller"
	I1013 22:04:37.641661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:37.641670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:37.641673       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:37.641693       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:37.742106       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:04:37.742118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:37.742124       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [90e2257cdef169aad8152d89754d028b3f47ff10734cdbe1fc2a91ee1d85145e] <==
	I1013 22:04:35.665355       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:04:36.322532       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:04:36.322670       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1013 22:04:36.322690       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:04:36.322701       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:04:36.343950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:36.343979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:36.347428       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:36.347478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:36.347657       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:36.347972       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:36.447974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:04:40 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:40.440782     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0db45673-8b9a-4762-9a55-139be862516b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k87hj\" (UID: \"0db45673-8b9a-4762-9a55-139be862516b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj"
	Oct 13 22:04:44 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:44.834168     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:04:45 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:45.093902     719 scope.go:117] "RemoveContainer" containerID="cf4ac7251df0d96daa1fe9582a548e71c97f763e8db76b6afece153a2be76ac4"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:46.098602     719 scope.go:117] "RemoveContainer" containerID="cf4ac7251df0d96daa1fe9582a548e71c97f763e8db76b6afece153a2be76ac4"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:46.098759     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:46 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:46.098940     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:04:47 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:47.103859     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:47 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:47.104049     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:04:49 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:49.124057     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2xpgc" podStartSLOduration=1.379476161 podStartE2EDuration="9.124031898s" podCreationTimestamp="2025-10-13 22:04:40 +0000 UTC" firstStartedPulling="2025-10-13 22:04:40.674506999 +0000 UTC m=+6.745719545" lastFinishedPulling="2025-10-13 22:04:48.419062737 +0000 UTC m=+14.490275282" observedRunningTime="2025-10-13 22:04:49.123842822 +0000 UTC m=+15.195055384" watchObservedRunningTime="2025-10-13 22:04:49.124031898 +0000 UTC m=+15.195244462"
	Oct 13 22:04:51 default-k8s-diff-port-505851 kubelet[719]: I1013 22:04:51.359062     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:04:51 default-k8s-diff-port-505851 kubelet[719]: E1013 22:04:51.359277     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.031906     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.160190     719 scope.go:117] "RemoveContainer" containerID="4f25936df743ff4c35d0faa599504b74c2e0654ccc9bf715f073dbac179b0ab8"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:04.160552     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:04 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:04.160734     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:08 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:08.175427     719 scope.go:117] "RemoveContainer" containerID="648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59"
	Oct 13 22:05:11 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:11.358808     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:11 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:11.359147     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:23 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:23.029047     719 scope.go:117] "RemoveContainer" containerID="ae38e1db9769544ad8187b6bca19aaae3cebfcbaec340f2d13559004fffb61c7"
	Oct 13 22:05:23 default-k8s-diff-port-505851 kubelet[719]: E1013 22:05:23.029254     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k87hj_kubernetes-dashboard(0db45673-8b9a-4762-9a55-139be862516b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k87hj" podUID="0db45673-8b9a-4762-9a55-139be862516b"
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:05:29 default-k8s-diff-port-505851 kubelet[719]: I1013 22:05:29.657065     719 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:05:29 default-k8s-diff-port-505851 systemd[1]: kubelet.service: Consumed 1.882s CPU time.
	
	
	==> kubernetes-dashboard [e976a19b88a83fe02afbf94aefc984bcec5775ad24483eea6e341b91a0ab5470] <==
	2025/10/13 22:04:48 Starting overwatch
	2025/10/13 22:04:48 Using namespace: kubernetes-dashboard
	2025/10/13 22:04:48 Using in-cluster config to connect to apiserver
	2025/10/13 22:04:48 Using secret token for csrf signing
	2025/10/13 22:04:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:04:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:04:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:04:48 Generating JWE encryption key
	2025/10/13 22:04:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:04:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:04:48 Initializing JWE encryption key from synchronized object
	2025/10/13 22:04:48 Creating in-cluster Sidecar client
	2025/10/13 22:04:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:04:48 Serving insecurely on HTTP port: 9090
	2025/10/13 22:05:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [62e1fa758a47ee529eab2178badec20856414d8ddeb60f0cc0c72ffdb14dc220] <==
	I1013 22:05:08.236783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:05:08.247275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:05:08.247351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:05:08.250778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:11.706338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:15.966563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:19.564771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:22.618248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.640811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.646523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:25.646698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:05:25.646780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbdf3e78-bf34-43b3-8edf-a59e96e32243", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706 became leader
	I1013 22:05:25.646867       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706!
	W1013 22:05:25.649207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:25.652551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:25.747472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-505851_808bbafc-0697-4df8-9489-3bd5acca0706!
	W1013 22:05:27.655949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:27.660384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:29.664087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:29.670067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:31.673820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:31.679326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:33.682646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:33.689348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [648d22473246e720757b31210010e94963b26e5ee7e4f4e57448c809e9ec4c59] <==
	I1013 22:04:37.414738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:05:07.419415       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
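Note on the two storage-provisioner logs above: they show a leader hand-off. The old instance died on an apiserver i/o timeout, and the restarted one re-acquired the kube-system/k8s.io-minikube-hostpath lock before starting its controller. Below is a minimal sketch of that leader-election pattern using client-go. The actual provisioner still takes a v1 Endpoints lock (hence the deprecation warnings above); this sketch assumes the newer Leases lock, in-cluster config, and the pod hostname as the holder identity.

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	id, err := os.Hostname() // pod name doubles as the holder identity
	if err != nil {
		log.Fatal(err)
	}

	// Same namespace/name as the lock in the log, but Leases instead of Endpoints.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to "successfully acquired lease ... Starting provisioner controller" above.
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}

OnStartedLeading is where the provisioner controller would start, matching the acquired-lease / started-controller sequence in the log.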
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851: exit status 2 (361.444307ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.62s)
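Note on the status check above: the host still reports Running (exit status 2) because the failed pause never froze the containers. A minimal Go sketch of verifying the paused state directly via the Docker CLI, reading the same .State.Paused field that the post-mortem docker inspect step dumps; the checkPaused helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkPaused asks the local Docker CLI for the container's .State.Paused
// field; hypothetical helper, assumes a local `docker` binary.
func checkPaused(name string) (bool, error) {
	out, err := exec.Command("docker", "inspect", "-f", "{{.State.Paused}}", name).Output()
	if err != nil {
		return false, fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	paused, err := checkPaused("default-k8s-diff-port-505851")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("paused:", paused) // false here: the pause attempt failed before freezing anything
}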

TestStartStop/group/embed-certs/serial/Pause (6.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-521669 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-521669 --alsologtostderr -v=1: exit status 80 (2.15333729s)

-- stdout --
	* Pausing node embed-certs-521669 ... 
	
	

-- /stdout --
** stderr ** 
	I1013 22:05:49.487129  520076 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:05:49.487467  520076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:49.487479  520076 out.go:374] Setting ErrFile to fd 2...
	I1013 22:05:49.487485  520076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:49.487808  520076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:05:49.488178  520076 out.go:368] Setting JSON to false
	I1013 22:05:49.488236  520076 mustload.go:65] Loading cluster: embed-certs-521669
	I1013 22:05:49.488645  520076 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:49.489851  520076 cli_runner.go:164] Run: docker container inspect embed-certs-521669 --format={{.State.Status}}
	I1013 22:05:49.513283  520076 host.go:66] Checking if "embed-certs-521669" exists ...
	I1013 22:05:49.513777  520076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:49.603864  520076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-13 22:05:49.587382392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:05:49.604713  520076 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-521669 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:05:49.607081  520076 out.go:179] * Pausing node embed-certs-521669 ... 
	I1013 22:05:49.608424  520076 host.go:66] Checking if "embed-certs-521669" exists ...
	I1013 22:05:49.608750  520076 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:49.608786  520076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-521669
	I1013 22:05:49.633737  520076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/embed-certs-521669/id_rsa Username:docker}
	I1013 22:05:49.754895  520076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:49.782394  520076 pause.go:52] kubelet running: true
	I1013 22:05:49.782553  520076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:50.008504  520076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:50.008623  520076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:50.103603  520076 cri.go:89] found id: "f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b"
	I1013 22:05:50.103643  520076 cri.go:89] found id: "f1a2082cf98ada2575c55be51a887e685d88ce434c06f68f0414e5a4d53bbaba"
	I1013 22:05:50.103650  520076 cri.go:89] found id: "1f49063ffccfd9f6190201e8082032d4920f99c8dc4110db28267978196f15df"
	I1013 22:05:50.103654  520076 cri.go:89] found id: "942193e0f8e228dbe430e60585172509fea39415b4683743cc8575fdd693853a"
	I1013 22:05:50.103658  520076 cri.go:89] found id: "51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219"
	I1013 22:05:50.103663  520076 cri.go:89] found id: "fdd62b2d9b12e7b64a03352f0d267662da3aa571a99ec9ecfb273dbe33b29f29"
	I1013 22:05:50.103667  520076 cri.go:89] found id: "dd6ca47e50d2cbd68431e1f5ab00c476734d1abae5ea035e8079056054b006bb"
	I1013 22:05:50.103670  520076 cri.go:89] found id: "aee0c2d478b28876e3d4fc00fe5f4d69ca458ac596bdc766a2e18070947e0fc8"
	I1013 22:05:50.103674  520076 cri.go:89] found id: "9380e5f9e72fadb5e073fb6200b1804c022f9df9694c1163e541594da8527714"
	I1013 22:05:50.103698  520076 cri.go:89] found id: "32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	I1013 22:05:50.103707  520076 cri.go:89] found id: "ddc954a1f166be754e4eb7e65b3e26d4f213b366dfcb0dee4876ade24670515c"
	I1013 22:05:50.103711  520076 cri.go:89] found id: ""
	I1013 22:05:50.103779  520076 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:50.120431  520076 retry.go:31] will retry after 311.827775ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:50Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:05:50.433029  520076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:50.447983  520076 pause.go:52] kubelet running: false
	I1013 22:05:50.448101  520076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:50.626571  520076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:50.626659  520076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:50.709667  520076 cri.go:89] found id: "f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b"
	I1013 22:05:50.709693  520076 cri.go:89] found id: "f1a2082cf98ada2575c55be51a887e685d88ce434c06f68f0414e5a4d53bbaba"
	I1013 22:05:50.709697  520076 cri.go:89] found id: "1f49063ffccfd9f6190201e8082032d4920f99c8dc4110db28267978196f15df"
	I1013 22:05:50.709701  520076 cri.go:89] found id: "942193e0f8e228dbe430e60585172509fea39415b4683743cc8575fdd693853a"
	I1013 22:05:50.709706  520076 cri.go:89] found id: "51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219"
	I1013 22:05:50.709711  520076 cri.go:89] found id: "fdd62b2d9b12e7b64a03352f0d267662da3aa571a99ec9ecfb273dbe33b29f29"
	I1013 22:05:50.709716  520076 cri.go:89] found id: "dd6ca47e50d2cbd68431e1f5ab00c476734d1abae5ea035e8079056054b006bb"
	I1013 22:05:50.709720  520076 cri.go:89] found id: "aee0c2d478b28876e3d4fc00fe5f4d69ca458ac596bdc766a2e18070947e0fc8"
	I1013 22:05:50.709724  520076 cri.go:89] found id: "9380e5f9e72fadb5e073fb6200b1804c022f9df9694c1163e541594da8527714"
	I1013 22:05:50.709733  520076 cri.go:89] found id: "32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	I1013 22:05:50.709737  520076 cri.go:89] found id: "ddc954a1f166be754e4eb7e65b3e26d4f213b366dfcb0dee4876ade24670515c"
	I1013 22:05:50.709740  520076 cri.go:89] found id: ""
	I1013 22:05:50.709788  520076 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:50.725475  520076 retry.go:31] will retry after 538.888453ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:50Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:05:51.265212  520076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:51.280846  520076 pause.go:52] kubelet running: false
	I1013 22:05:51.280900  520076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:05:51.454612  520076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:05:51.454746  520076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:05:51.541499  520076 cri.go:89] found id: "f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b"
	I1013 22:05:51.541527  520076 cri.go:89] found id: "f1a2082cf98ada2575c55be51a887e685d88ce434c06f68f0414e5a4d53bbaba"
	I1013 22:05:51.541532  520076 cri.go:89] found id: "1f49063ffccfd9f6190201e8082032d4920f99c8dc4110db28267978196f15df"
	I1013 22:05:51.541537  520076 cri.go:89] found id: "942193e0f8e228dbe430e60585172509fea39415b4683743cc8575fdd693853a"
	I1013 22:05:51.541541  520076 cri.go:89] found id: "51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219"
	I1013 22:05:51.541546  520076 cri.go:89] found id: "fdd62b2d9b12e7b64a03352f0d267662da3aa571a99ec9ecfb273dbe33b29f29"
	I1013 22:05:51.541549  520076 cri.go:89] found id: "dd6ca47e50d2cbd68431e1f5ab00c476734d1abae5ea035e8079056054b006bb"
	I1013 22:05:51.541562  520076 cri.go:89] found id: "aee0c2d478b28876e3d4fc00fe5f4d69ca458ac596bdc766a2e18070947e0fc8"
	I1013 22:05:51.541566  520076 cri.go:89] found id: "9380e5f9e72fadb5e073fb6200b1804c022f9df9694c1163e541594da8527714"
	I1013 22:05:51.541574  520076 cri.go:89] found id: "32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	I1013 22:05:51.541577  520076 cri.go:89] found id: "ddc954a1f166be754e4eb7e65b3e26d4f213b366dfcb0dee4876ade24670515c"
	I1013 22:05:51.541581  520076 cri.go:89] found id: ""
	I1013 22:05:51.541625  520076 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:05:51.558591  520076 out.go:203] 
	W1013 22:05:51.560113  520076 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:05:51.560205  520076 out.go:285] * 
	* 
	W1013 22:05:51.565101  520076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:05:51.566470  520076 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-521669 --alsologtostderr -v=1 failed: exit status 80
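The failure mode here is the pause path's container probe: each attempt runs sudo runc list -f json on the node, retries after a short jittered backoff (311ms, then 538ms in the log above), and finally surfaces as GUEST_PAUSE. A minimal local sketch of that probe-and-retry loop follows, assuming runc is on PATH and passwordless sudo; the real minikube code runs the command over SSH inside the node container.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunning mirrors the probe retried above; hypothetical local sketch,
// whereas the real code executes this over SSH inside the kic container.
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		out, err := listRunning()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = err
		// Short jittered backoff, in the spirit of the 311ms/538ms waits logged above.
		wait := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	fmt.Println("giving up:", lastErr) // this is what surfaces as GUEST_PAUSE
}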
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-521669
helpers_test.go:243: (dbg) docker inspect embed-certs-521669:

-- stdout --
	[
	    {
	        "Id": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	        "Created": "2025-10-13T22:03:15.556123483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505502,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:48.791654166Z",
	            "FinishedAt": "2025-10-13T22:04:47.294015675Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hostname",
	        "HostsPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hosts",
	        "LogPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203-json.log",
	        "Name": "/embed-certs-521669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-521669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-521669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	                "LowerDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-521669",
	                "Source": "/var/lib/docker/volumes/embed-certs-521669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-521669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-521669",
	                "name.minikube.sigs.k8s.io": "embed-certs-521669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c12db32de2341b9b4526a9ee42b76d0b6bc3e0e6bd3e6518554950e96b3a3617",
	            "SandboxKey": "/var/run/docker/netns/c12db32de234",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-521669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:9c:e8:76:0a:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50800b9f1c9d1d3bc768e42eef173bae32c640bbf4383e5f2ce56c38ad7a7349",
	                    "EndpointID": "628ab4c06aaf3f28d03a4474ae3f2dfbb8611624f650fa20553ff43dea62afe3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-521669",
	                        "1baa373eead7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669: exit status 2 (388.448232ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25: (1.284779502s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-200102 sudo containerd config dump                                                                                                                         │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p auto-200102 sudo crio config                                                                                                                                    │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p auto-200102                                                                                                                                                     │ auto-200102                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p calico-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-200102                │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ image   │ default-k8s-diff-port-505851 image list --format=json                                                                                                              │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p default-k8s-diff-port-505851 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 pgrep -a kubelet                                                                                                                                 │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ delete  │ -p default-k8s-diff-port-505851                                                                                                                                    │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ delete  │ -p default-k8s-diff-port-505851                                                                                                                                    │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p custom-flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-200102        │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/hosts                                                                                                                              │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo crictl pods                                                                                                                                 │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo crictl ps --all                                                                                                                             │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ image   │ embed-certs-521669 image list --format=json                                                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p embed-certs-521669 --alsologtostderr -v=1                                                                                                                       │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo ip a s                                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo ip r s                                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo iptables-save                                                                                                                               │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:05:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
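
	[Editor's note] Every entry below follows the glog header format declared above. For readers who want to filter or post-process these logs, here is a minimal Go sketch of a parser for that format; the regexp and field names are illustrative, not minikube's own code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches the documented header:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var glogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+):(\d+)\] (.*)$`)

	func main() {
		line := "I1013 22:05:40.610020  517273 out.go:360] Setting OutFile to fd 1 ..."
		m := glogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a glog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}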
	I1013 22:05:40.610020  517273 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:05:40.610190  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:40.610199  517273 out.go:374] Setting ErrFile to fd 2...
	I1013 22:05:40.610203  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:40.610437  517273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:05:40.610958  517273 out.go:368] Setting JSON to false
	I1013 22:05:40.612470  517273 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6489,"bootTime":1760386652,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:05:40.612587  517273 start.go:141] virtualization: kvm guest
	I1013 22:05:40.614884  517273 out.go:179] * [custom-flannel-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:05:40.616582  517273 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:05:40.616616  517273 notify.go:220] Checking for updates...
	I1013 22:05:40.619118  517273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:05:40.620818  517273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:40.622041  517273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:05:40.623310  517273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:05:40.624717  517273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:05:40.626497  517273 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626609  517273 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626700  517273 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626817  517273 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:05:40.650970  517273 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:05:40.651085  517273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:40.709561  517273 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:05:40.699557614 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
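
	[Editor's note] The probe above shells out to `docker system info --format "{{json .}}"` and decodes the result. A minimal Go sketch of that pattern follows; the struct holds only a small subset of the fields visible in the dump (chosen for illustration), since encoding/json ignores the rest:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo captures a few fields minikube cares about from the
	// JSON emitted by `docker system info --format "{{json .}}"`.
	type dockerInfo struct {
		CgroupDriver string `json:"CgroupDriver"`
		MemTotal     int64  `json:"MemTotal"`
		NCPU         int    `json:"NCPU"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cgroup driver=%s mem=%d bytes cpus=%d\n", info.CgroupDriver, info.MemTotal, info.NCPU)
	}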
	I1013 22:05:40.709677  517273 docker.go:318] overlay module found
	I1013 22:05:40.711632  517273 out.go:179] * Using the docker driver based on user configuration
	I1013 22:05:40.713135  517273 start.go:305] selected driver: docker
	I1013 22:05:40.713153  517273 start.go:925] validating driver "docker" against <nil>
	I1013 22:05:40.713164  517273 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:05:40.713806  517273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:40.771765  517273 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:05:40.762290223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:05:40.772009  517273 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:05:40.772318  517273 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:40.774402  517273 out.go:179] * Using Docker driver with root privileges
	I1013 22:05:40.775742  517273 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1013 22:05:40.775801  517273 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1013 22:05:40.775882  517273 start.go:349] cluster config:
	{Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:05:40.777238  517273 out.go:179] * Starting "custom-flannel-200102" primary control-plane node in "custom-flannel-200102" cluster
	I1013 22:05:40.778453  517273 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:05:40.779700  517273 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:05:40.780911  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:40.780956  517273 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:05:40.780984  517273 cache.go:58] Caching tarball of preloaded images
	I1013 22:05:40.781043  517273 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:05:40.781127  517273 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:05:40.781144  517273 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:05:40.781270  517273 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json ...
	I1013 22:05:40.781295  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json: {Name:mk07a72dbdb2ec66cf7c88827d8cab605e23d904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:40.802555  517273 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:05:40.802579  517273 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:05:40.802595  517273 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:05:40.802619  517273 start.go:360] acquireMachinesLock for custom-flannel-200102: {Name:mkcd003ae0d506525f7ece13c5a148a7bc023af9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:05:40.802729  517273 start.go:364] duration metric: took 79.219µs to acquireMachinesLock for "custom-flannel-200102"
	I1013 22:05:40.802754  517273 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:40.802816  517273 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:05:38.783437  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:38.783480  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:38.783492  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:38.783502  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:38.783507  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:38.783514  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:38.783519  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:38.783524  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:38.783529  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:38.783534  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:38.783553  510068 retry.go:31] will retry after 1.784343518s: missing components: kube-dns
	I1013 22:05:40.573104  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:40.573136  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:40.573144  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:40.573151  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:40.573155  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:40.573160  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:40.573163  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:40.573167  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:40.573170  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:40.573173  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:40.573188  510068 retry.go:31] will retry after 1.675380625s: missing components: kube-dns
	I1013 22:05:42.254693  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:42.254754  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:42.254768  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:42.254792  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:42.254801  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:42.254811  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:42.254816  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:42.254825  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:42.254831  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:42.254841  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:42.254862  510068 retry.go:31] will retry after 2.7450669s: missing components: kube-dns
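
	[Editor's note] The `will retry after ...` lines above come from minikube's retry helper, which polls the kube-system pods until kube-dns reports Running. A simplified, self-contained Go sketch of that poll-with-growing-jittered-delay pattern follows; the base interval, growth, and jitter factor are assumptions for illustration, not minikube's exact constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// pollUntil calls check() until it succeeds or the deadline passes,
	// sleeping a growing, jittered interval between attempts (the log's
	// 1.78s, 1.68s, 2.75s, 3.44s sequence is consistent with this shape).
	func pollUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		base := 1500 * time.Millisecond
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			sleep := base + time.Duration(attempt)*500*time.Millisecond
			sleep += time.Duration((rand.Float64() - 0.5) * 0.5 * float64(sleep)) // +/-25% jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return errors.New("timed out")
	}

	func main() {
		tries := 0
		_ = pollUntil(30*time.Second, func() error {
			tries++
			if tries < 4 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("all components running after", tries, "checks")
	}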
	I1013 22:05:40.805051  517273 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:05:40.805308  517273 start.go:159] libmachine.API.Create for "custom-flannel-200102" (driver="docker")
	I1013 22:05:40.805345  517273 client.go:168] LocalClient.Create starting
	I1013 22:05:40.805416  517273 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:05:40.805461  517273 main.go:141] libmachine: Decoding PEM data...
	I1013 22:05:40.805488  517273 main.go:141] libmachine: Parsing certificate...
	I1013 22:05:40.805579  517273 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:05:40.805612  517273 main.go:141] libmachine: Decoding PEM data...
	I1013 22:05:40.805627  517273 main.go:141] libmachine: Parsing certificate...
	I1013 22:05:40.806069  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:05:40.824133  517273 cli_runner.go:211] docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:05:40.824216  517273 network_create.go:284] running [docker network inspect custom-flannel-200102] to gather additional debugging logs...
	I1013 22:05:40.824245  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102
	W1013 22:05:40.841516  517273 cli_runner.go:211] docker network inspect custom-flannel-200102 returned with exit code 1
	I1013 22:05:40.841548  517273 network_create.go:287] error running [docker network inspect custom-flannel-200102]: docker network inspect custom-flannel-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-200102 not found
	I1013 22:05:40.841579  517273 network_create.go:289] output of [docker network inspect custom-flannel-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-200102 not found
	
	** /stderr **
	I1013 22:05:40.841787  517273 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:40.862232  517273 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:05:40.863102  517273 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:05:40.863888  517273 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:05:40.864702  517273 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec07a0}
	I1013 22:05:40.864733  517273 network_create.go:124] attempt to create docker network custom-flannel-200102 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:05:40.864799  517273 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-200102 custom-flannel-200102
	I1013 22:05:40.931054  517273 network_create.go:108] docker network custom-flannel-200102 192.168.76.0/24 created
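
	[Editor's note] The three `skipping subnet ... that is taken` lines and the final pick of 192.168.76.0/24 show the free-subnet scan stepping the third octet by 9 (49, 58, 67, 76). A rough Go sketch of that walk; the step size is inferred from this log, and the `taken` set is hard-coded here where minikube derives it from the host's existing docker bridges:

	package main

	import (
		"fmt"
		"net"
	)

	// freeSubnet returns the first 192.168.x.0/24 candidate, stepping the
	// third octet by 9, that is not already claimed by an existing bridge.
	func freeSubnet(taken map[string]bool) *net.IPNet {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				_, n, _ := net.ParseCIDR(cidr)
				return n
			}
		}
		return nil
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(freeSubnet(taken)) // 192.168.76.0/24
	}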
	I1013 22:05:40.931089  517273 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-200102" container
	I1013 22:05:40.931173  517273 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:05:40.951010  517273 cli_runner.go:164] Run: docker volume create custom-flannel-200102 --label name.minikube.sigs.k8s.io=custom-flannel-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:05:40.969487  517273 oci.go:103] Successfully created a docker volume custom-flannel-200102
	I1013 22:05:40.969591  517273 cli_runner.go:164] Run: docker run --rm --name custom-flannel-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-200102 --entrypoint /usr/bin/test -v custom-flannel-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:05:41.491036  517273 oci.go:107] Successfully prepared a docker volume custom-flannel-200102
	I1013 22:05:41.491102  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:41.491128  517273 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:05:41.491195  517273 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:05:45.005141  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:45.005185  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:45.005198  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:45.005221  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:45.005230  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:45.005237  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:45.005244  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:45.005249  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:45.005258  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:45.005267  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:45.005302  510068 retry.go:31] will retry after 3.44311488s: missing components: kube-dns
	I1013 22:05:48.452986  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:48.453043  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:48.453055  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:48.453064  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Running
	I1013 22:05:48.453070  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:48.453076  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:48.453082  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:48.453089  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:48.453097  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:48.453101  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:48.453110  510068 system_pods.go:126] duration metric: took 14.990653671s to wait for k8s-apps to be running ...
	I1013 22:05:48.453121  510068 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:05:48.453169  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:48.468292  510068 system_svc.go:56] duration metric: took 15.159035ms WaitForService to wait for kubelet
	I1013 22:05:48.468323  510068 kubeadm.go:586] duration metric: took 19.871791057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:48.468354  510068 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:05:48.472146  510068 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:05:48.472178  510068 node_conditions.go:123] node cpu capacity is 8
	I1013 22:05:48.472194  510068 node_conditions.go:105] duration metric: took 3.834382ms to run NodePressure ...
	I1013 22:05:48.472210  510068 start.go:241] waiting for startup goroutines ...
	I1013 22:05:48.472218  510068 start.go:246] waiting for cluster config update ...
	I1013 22:05:48.472231  510068 start.go:255] writing updated cluster config ...
	I1013 22:05:48.472550  510068 ssh_runner.go:195] Run: rm -f paused
	I1013 22:05:48.477490  510068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:48.482020  510068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6bk7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.487231  510068 pod_ready.go:94] pod "coredns-66bc5c9577-6bk7g" is "Ready"
	I1013 22:05:48.487259  510068 pod_ready.go:86] duration metric: took 5.210749ms for pod "coredns-66bc5c9577-6bk7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.489611  510068 pod_ready.go:83] waiting for pod "etcd-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.494309  510068 pod_ready.go:94] pod "etcd-calico-200102" is "Ready"
	I1013 22:05:48.494335  510068 pod_ready.go:86] duration metric: took 4.687947ms for pod "etcd-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.496591  510068 pod_ready.go:83] waiting for pod "kube-apiserver-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.501003  510068 pod_ready.go:94] pod "kube-apiserver-calico-200102" is "Ready"
	I1013 22:05:48.501031  510068 pod_ready.go:86] duration metric: took 4.413264ms for pod "kube-apiserver-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.503134  510068 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.884101  510068 pod_ready.go:94] pod "kube-controller-manager-calico-200102" is "Ready"
	I1013 22:05:48.884136  510068 pod_ready.go:86] duration metric: took 380.982445ms for pod "kube-controller-manager-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.083279  510068 pod_ready.go:83] waiting for pod "kube-proxy-ggd54" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.483122  510068 pod_ready.go:94] pod "kube-proxy-ggd54" is "Ready"
	I1013 22:05:49.483153  510068 pod_ready.go:86] duration metric: took 399.845578ms for pod "kube-proxy-ggd54" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.684490  510068 pod_ready.go:83] waiting for pod "kube-scheduler-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:50.083026  510068 pod_ready.go:94] pod "kube-scheduler-calico-200102" is "Ready"
	I1013 22:05:50.083062  510068 pod_ready.go:86] duration metric: took 398.540722ms for pod "kube-scheduler-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:50.083085  510068 pod_ready.go:40] duration metric: took 1.605554156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:50.143776  510068 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:50.145669  510068 out.go:179] * Done! kubectl is now configured to use "calico-200102" cluster and "default" namespace by default
	I1013 22:05:46.070036  517273 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.57877893s)
	I1013 22:05:46.070066  517273 kic.go:203] duration metric: took 4.578935394s to extract preloaded images to volume ...
	W1013 22:05:46.070166  517273 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:05:46.070196  517273 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:05:46.070236  517273 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:05:46.126795  517273 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-200102 --name custom-flannel-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-200102 --network custom-flannel-200102 --ip 192.168.76.2 --volume custom-flannel-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:05:46.409060  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Running}}
	I1013 22:05:46.429128  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.448596  517273 cli_runner.go:164] Run: docker exec custom-flannel-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:05:46.498290  517273 oci.go:144] the created container "custom-flannel-200102" has a running status.
	I1013 22:05:46.498321  517273 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa...
	I1013 22:05:46.728370  517273 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:05:46.761531  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.785351  517273 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:05:46.785382  517273 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:05:46.833458  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.854376  517273 machine.go:93] provisionDockerMachine start ...
	I1013 22:05:46.854515  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:46.875571  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:46.875929  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:46.875951  517273 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:05:47.041655  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-200102
	
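	[Editor's note] provisionDockerMachine dials the container's published SSH port (127.0.0.1:33118 in this run) with the key generated earlier and runs `hostname`. A minimal Go sketch of that step using golang.org/x/crypto/ssh; the key path and port are taken from this log, and skipping host-key verification mirrors how local containers are treated here:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/custom-flannel-200102/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33118", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
	}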
	I1013 22:05:47.041694  517273 ubuntu.go:182] provisioning hostname "custom-flannel-200102"
	I1013 22:05:47.041767  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.062135  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.063452  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.063538  517273 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-200102 && echo "custom-flannel-200102" | sudo tee /etc/hostname
	I1013 22:05:47.223239  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-200102
	
	I1013 22:05:47.223321  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.243551  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.243936  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.243973  517273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:05:47.388934  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
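
	[Editor's note] The shell run just above is an idempotent hosts-file update: leave the file alone if the hostname is already mapped, otherwise rewrite an existing 127.0.1.1 entry or append one. The same logic in Go, for illustration only (minikube runs the shell version over SSH, and this approximation treats any line whose last field is the hostname as "already mapped"):

	package main

	import (
		"fmt"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				return hosts // already mapped
			}
		}
		entry := "127.0.1.1 " + name
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = entry // replace the existing 127.0.1.1 line
				return strings.Join(lines, "\n")
			}
		}
		return strings.Join(append(lines, entry), "\n") // append a new entry
	}

	func main() {
		fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name", "custom-flannel-200102"))
	}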
	I1013 22:05:47.388964  517273 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:05:47.389050  517273 ubuntu.go:190] setting up certificates
	I1013 22:05:47.389064  517273 provision.go:84] configureAuth start
	I1013 22:05:47.389111  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:47.410174  517273 provision.go:143] copyHostCerts
	I1013 22:05:47.410245  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:05:47.410258  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:05:47.410349  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:05:47.410484  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:05:47.410492  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:05:47.410533  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:05:47.410628  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:05:47.410635  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:05:47.410671  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:05:47.410773  517273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-200102 san=[127.0.0.1 192.168.76.2 custom-flannel-200102 localhost minikube]
	I1013 22:05:47.677831  517273 provision.go:177] copyRemoteCerts
	I1013 22:05:47.677894  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:05:47.677941  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.696394  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:47.801395  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:05:47.823351  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1013 22:05:47.844626  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:05:47.866088  517273 provision.go:87] duration metric: took 477.009651ms to configureAuth
	I1013 22:05:47.866118  517273 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:05:47.866320  517273 config.go:182] Loaded profile config "custom-flannel-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:47.866465  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.889194  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.889481  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.889505  517273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:05:48.158847  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:05:48.158883  517273 machine.go:96] duration metric: took 1.304471658s to provisionDockerMachine
	I1013 22:05:48.158896  517273 client.go:171] duration metric: took 7.353543831s to LocalClient.Create
	I1013 22:05:48.158921  517273 start.go:167] duration metric: took 7.353612609s to libmachine.API.Create "custom-flannel-200102"
	I1013 22:05:48.158935  517273 start.go:293] postStartSetup for "custom-flannel-200102" (driver="docker")
	I1013 22:05:48.158953  517273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:05:48.159073  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:05:48.159132  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.178109  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.283534  517273 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:05:48.288206  517273 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:05:48.288240  517273 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:05:48.288257  517273 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:05:48.288321  517273 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:05:48.288418  517273 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:05:48.288542  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:05:48.297633  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:48.321645  517273 start.go:296] duration metric: took 162.690099ms for postStartSetup
	I1013 22:05:48.322119  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:48.341595  517273 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json ...
	I1013 22:05:48.341920  517273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:05:48.341983  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.361854  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.462899  517273 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:05:48.468644  517273 start.go:128] duration metric: took 7.665808291s to createHost
	I1013 22:05:48.468673  517273 start.go:83] releasing machines lock for "custom-flannel-200102", held for 7.665932087s
	I1013 22:05:48.468749  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:48.489810  517273 ssh_runner.go:195] Run: cat /version.json
	I1013 22:05:48.489864  517273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:05:48.489964  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.489866  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.512416  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.512749  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.687792  517273 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:48.696525  517273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:05:48.752102  517273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:05:48.759945  517273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:05:48.760063  517273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:05:48.797340  517273 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:05:48.797364  517273 start.go:495] detecting cgroup driver to use...
	I1013 22:05:48.797409  517273 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:05:48.797465  517273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:05:48.821148  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:05:48.839314  517273 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:05:48.839375  517273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:05:48.863630  517273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:05:48.888738  517273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:05:49.011692  517273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:05:49.140495  517273 docker.go:234] disabling docker service ...
	I1013 22:05:49.140555  517273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:05:49.166570  517273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:05:49.184764  517273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:05:49.311921  517273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:05:49.435394  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:05:49.454513  517273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:05:49.474066  517273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:05:49.474240  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.490385  517273 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:05:49.490446  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.504495  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.517646  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.529740  517273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:05:49.544171  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.559507  517273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.580203  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.594811  517273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:05:49.605968  517273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:05:49.617660  517273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:49.743416  517273 ssh_runner.go:195] Run: sudo systemctl restart crio
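
	The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O to the systemd cgroup driver, run conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. After the restart, the result can be spot-checked from the host (a sketch; the drop-in path is the one edited above):

	minikube -p custom-flannel-200102 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	# confirm the runtime came back on the expected socket
	minikube -p custom-flannel-200102 ssh -- sudo crictl -r unix:///var/run/crio/crio.sock version
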
	I1013 22:05:50.199919  517273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:05:50.200132  517273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:05:50.204855  517273 start.go:563] Will wait 60s for crictl version
	I1013 22:05:50.204916  517273 ssh_runner.go:195] Run: which crictl
	I1013 22:05:50.209131  517273 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:05:50.239042  517273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:05:50.239141  517273 ssh_runner.go:195] Run: crio --version
	I1013 22:05:50.282258  517273 ssh_runner.go:195] Run: crio --version
	I1013 22:05:50.318282  517273 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:05:50.320116  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:50.340369  517273 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:05:50.346583  517273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:50.360693  517273 kubeadm.go:883] updating cluster {Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:05:50.360842  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:50.360914  517273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:50.397089  517273 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:50.397112  517273 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:05:50.397157  517273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:50.424458  517273 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:50.424481  517273 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:05:50.424489  517273 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:05:50.424572  517273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
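
	The empty ExecStart= line in this drop-in is deliberate systemd syntax: it clears the ExecStart inherited from the packaged kubelet.service so the next line can substitute minikube's full kubelet command. The merged unit is visible with (a sketch):

	minikube -p custom-flannel-200102 ssh -- systemctl cat kubelet
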
	I1013 22:05:50.424635  517273 ssh_runner.go:195] Run: crio config
	I1013 22:05:50.478925  517273 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1013 22:05:50.478975  517273 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:05:50.479025  517273 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-200102 NodeName:custom-flannel-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:05:50.479184  517273 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
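
	This single manifest stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, and is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When hand-editing a file in this shape, recent kubeadm releases can sanity-check it before use (a sketch; assumes the `kubeadm config validate` subcommand shipped with the v1.34.1 binaries used here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new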
	
	I1013 22:05:50.479256  517273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:05:50.494053  517273 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:05:50.494138  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:05:50.503711  517273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1013 22:05:50.518270  517273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:05:50.535755  517273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1013 22:05:50.550495  517273 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:05:50.554826  517273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:50.566100  517273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	
	
	==> CRI-O <==
	Oct 13 22:05:16 embed-certs-521669 crio[555]: time="2025-10-13T22:05:16.569780388Z" level=info msg="Started container" PID=1720 containerID=5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper id=9885fa41-89e7-4d24-bf16-ab5eb734a489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=400cb80372776e6c2e43382331e16ac1fd10c9c4b54d438bd7c69a5ae81ded52
	Oct 13 22:05:17 embed-certs-521669 crio[555]: time="2025-10-13T22:05:17.469124565Z" level=info msg="Removing container: 950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df" id=e2202320-1cb5-4894-9f78-26235905ad5b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:17 embed-certs-521669 crio[555]: time="2025-10-13T22:05:17.480187393Z" level=info msg="Removed container 950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=e2202320-1cb5-4894-9f78-26235905ad5b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.507506928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8459d21f-3232-45bb-a424-f32874b55697 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.510304983Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0c41b1f6-5e0a-45bf-9251-8df9e2e1b1d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.5115904Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d77f5060-d897-4ad7-b194-429b0d14fd44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.511936083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.517711751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.517985597Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0be378103d9ac840144298d89187309b5a5dd2b00ad7191be6b507a71ab32500/merged/etc/passwd: no such file or directory"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.518048598Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0be378103d9ac840144298d89187309b5a5dd2b00ad7191be6b507a71ab32500/merged/etc/group: no such file or directory"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.518398996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.55936445Z" level=info msg="Created container f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b: kube-system/storage-provisioner/storage-provisioner" id=d77f5060-d897-4ad7-b194-429b0d14fd44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.560208038Z" level=info msg="Starting container: f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b" id=b1591a0d-8be3-498c-91d5-75d458d84e17 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.562802623Z" level=info msg="Started container" PID=1734 containerID=f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b description=kube-system/storage-provisioner/storage-provisioner id=b1591a0d-8be3-498c-91d5-75d458d84e17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=530784bfeb10b575dc95daa5849904a87e0d13bfd19b0dc5966d8432dc59fb09
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.353751031Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=878c6167-cbec-4b68-821f-27047d53df70 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.357464857Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5df9cf01-b414-41f0-9306-895a33b05a0f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.359480976Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=1e40a084-3d99-4bc9-977c-b77af1c1b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.359808558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.370579361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.371349767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.417154022Z" level=info msg="Created container 32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=1e40a084-3d99-4bc9-977c-b77af1c1b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.418364269Z" level=info msg="Starting container: 32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672" id=cfac5671-4829-4a23-918d-217715679026 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.423151675Z" level=info msg="Started container" PID=1770 containerID=32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper id=cfac5671-4829-4a23-918d-217715679026 name=/runtime.v1.RuntimeService/StartContainer sandboxID=400cb80372776e6c2e43382331e16ac1fd10c9c4b54d438bd7c69a5ae81ded52
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.550283548Z" level=info msg="Removing container: 5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422" id=d687e991-6c2f-4fad-9f90-2945abe5438d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.568434165Z" level=info msg="Removed container 5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=d687e991-6c2f-4fad-9f90-2945abe5438d name=/runtime.v1.RuntimeService/RemoveContainer
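
	The Checking image / Creating container / Started container / Removing container cycle above for dashboard-metrics-scraper is what a crash-looping pod looks like from the runtime's side: kubelet starts a replacement, it exits, and the previous attempt is garbage-collected (the status table below shows it Exited at attempt 3). The usual next step is the previous container's logs and the pod events (a sketch; names taken from this run):

	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-lshp4 --previous
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-lshp4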
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32890e2303469       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   400cb80372776       dashboard-metrics-scraper-6ffb444bf9-lshp4   kubernetes-dashboard
	f8588d53d142a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   530784bfeb10b       storage-provisioner                          kube-system
	ddc954a1f166b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   0b1791a03dfe8       kubernetes-dashboard-855c9754f9-69m9v        kubernetes-dashboard
	7007fa2f7855e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   f4804271464b4       busybox                                      default
	f1a2082cf98ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   3db830775684e       coredns-66bc5c9577-kzq9t                     kube-system
	1f49063ffccfd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   5916c60b3452f       kube-proxy-jjzrs                             kube-system
	942193e0f8e22       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   c231839291fe4       kindnet-rqr6b                                kube-system
	51119e820cd1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   530784bfeb10b       storage-provisioner                          kube-system
	fdd62b2d9b12e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   4e435bf260645       kube-scheduler-embed-certs-521669            kube-system
	dd6ca47e50d2c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   afa10fac5e453       kube-apiserver-embed-certs-521669            kube-system
	aee0c2d478b28       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   72a739170411e       kube-controller-manager-embed-certs-521669   kube-system
	9380e5f9e72fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   0af436d2ebec3       etcd-embed-certs-521669                      kube-system
	
	
	==> coredns [f1a2082cf98ada2575c55be51a887e685d88ce434c06f68f0414e5a4d53bbaba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39693 - 59632 "HINFO IN 6179617301028422114.1954051897100138550. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062655421s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
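
	The i/o timeouts against 10.96.0.1:443 are CoreDNS reaching the apiserver through the service VIP, which is typically not yet programmed by kube-proxy in the first moments after a restart; CoreDNS keeps serving with an unsynced API in the meantime, per the WARNING above. If they persisted, probing the VIP from the node and dumping the NAT rules would be the first move (a sketch; profile and VIP taken from this run):

	minikube -p embed-certs-521669 ssh -- curl -sk -m 3 https://10.96.0.1:443/version
	minikube -p embed-certs-521669 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head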
	
	
	==> describe nodes <==
	Name:               embed-certs-521669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-521669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-521669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-521669
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:05:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-521669
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                3d04c77e-97c4-4463-b7c6-6837fef5c3d8
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-kzq9t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-embed-certs-521669                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-rqr6b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-embed-certs-521669             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-521669    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-jjzrs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-embed-certs-521669             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lshp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-69m9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x8 over 2m24s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m15s                  node-controller  Node embed-certs-521669 event: Registered Node embed-certs-521669 in Controller
	  Normal  NodeReady                93s                    kubelet          Node embed-certs-521669 status is now: NodeReady
	  Normal  Starting                 57s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                    node-controller  Node embed-certs-521669 event: Registered Node embed-certs-521669 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
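
	A "martian source" is a packet whose source address should be impossible on the interface it arrived on; here it is loopback-sourced traffic (127.0.0.1) showing up on eth0 toward a pod IP. kube-proxy's log further down shows it setting route_localnet=1 so NodePorts answer on localhost, and that setting is the usual reason loopback-sourced packets leave the loopback device at all. The knobs involved can be read with (a sketch):

	minikube -p embed-certs-521669 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians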
	
	
	==> etcd [9380e5f9e72fadb5e073fb6200b1804c022f9df9694c1163e541594da8527714] <==
	{"level":"warn","ts":"2025-10-13T22:05:02.406366Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"309.714387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 ","response":"range_response_count:1 size:2973"}
	{"level":"info","ts":"2025-10-13T22:05:02.406447Z","caller":"traceutil/trace.go:172","msg":"trace[1678593396] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9; range_end:; response_count:1; response_revision:536; }","duration":"309.807995ms","start":"2025-10-13T22:05:02.096623Z","end":"2025-10-13T22:05:02.406431Z","steps":["trace[1678593396] 'agreement among raft nodes before linearized reading'  (duration: 43.777322ms)","trace[1678593396] 'range keys from in-memory index tree'  (duration: 265.896602ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:05:02.406486Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.096608Z","time spent":"309.863844ms","remote":"127.0.0.1:37350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":2996,"request content":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T22:05:02.407147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.06116ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789312747368227 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" mod_revision:530 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" value_size:2754 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T22:05:02.407434Z","caller":"traceutil/trace.go:172","msg":"trace[1627572079] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"311.70827ms","start":"2025-10-13T22:05:02.095715Z","end":"2025-10-13T22:05:02.407423Z","steps":["trace[1627572079] 'process raft request'  (duration: 311.683576ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.407497Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.095698Z","time spent":"311.768276ms","remote":"127.0.0.1:36522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":783,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9.186e2c330af945f6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9.186e2c330af945f6\" value_size:679 lease:4650417275892592377 >> failure:<>"}
	{"level":"info","ts":"2025-10-13T22:05:02.407659Z","caller":"traceutil/trace.go:172","msg":"trace[364711714] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"317.786405ms","start":"2025-10-13T22:05:02.089860Z","end":"2025-10-13T22:05:02.407646Z","steps":["trace[364711714] 'process raft request'  (duration: 50.604021ms)","trace[364711714] 'compare'  (duration: 265.968288ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:05:02.407729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.089797Z","time spent":"317.887112ms","remote":"127.0.0.1:36744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2835,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" mod_revision:530 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" value_size:2754 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.407844Z","caller":"traceutil/trace.go:172","msg":"trace[1935186855] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"267.457417ms","start":"2025-10-13T22:05:02.140377Z","end":"2025-10-13T22:05:02.407835Z","steps":["trace[1935186855] 'read index received'  (duration: 143.857856ms)","trace[1935186855] 'applied index is now lower than readState.Index'  (duration: 123.59875ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:05:02.407978Z","caller":"traceutil/trace.go:172","msg":"trace[1576442207] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"316.786478ms","start":"2025-10-13T22:05:02.091169Z","end":"2025-10-13T22:05:02.407955Z","steps":["trace[1576442207] 'process raft request'  (duration: 316.068508ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408049Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.091151Z","time spent":"316.863631ms","remote":"127.0.0.1:37350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3078,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" mod_revision:523 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" value_size:2996 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.408238Z","caller":"traceutil/trace.go:172","msg":"trace[88820641] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"316.581514ms","start":"2025-10-13T22:05:02.091647Z","end":"2025-10-13T22:05:02.408228Z","steps":["trace[88820641] 'process raft request'  (duration: 315.677827ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408288Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.091631Z","time spent":"316.630449ms","remote":"127.0.0.1:37296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4879,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:527 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4808 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.408405Z","caller":"traceutil/trace.go:172","msg":"trace[1654464183] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"312.949921ms","start":"2025-10-13T22:05:02.095446Z","end":"2025-10-13T22:05:02.408396Z","steps":["trace[1654464183] 'process raft request'  (duration: 311.922127ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408455Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.095428Z","time spent":"312.997591ms","remote":"127.0.0.1:37434","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":835,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4.186e2c330b50b7c5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4.186e2c330b50b7c5\" value_size:720 lease:4650417275892592176 >> failure:<>"}
	{"level":"warn","ts":"2025-10-13T22:05:02.408641Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.337108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4\" limit:1 ","response":"range_response_count:1 size:2782"}
	{"level":"info","ts":"2025-10-13T22:05:02.408694Z","caller":"traceutil/trace.go:172","msg":"trace[249669695] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4; range_end:; response_count:1; response_revision:541; }","duration":"183.388688ms","start":"2025-10-13T22:05:02.225288Z","end":"2025-10-13T22:05:02.408676Z","steps":["trace[249669695] 'agreement among raft nodes before linearized reading'  (duration: 183.257567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408795Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.069554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzq9t\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-13T22:05:02.408830Z","caller":"traceutil/trace.go:172","msg":"trace[445184127] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kzq9t; range_end:; response_count:1; response_revision:541; }","duration":"301.110323ms","start":"2025-10-13T22:05:02.107711Z","end":"2025-10-13T22:05:02.408821Z","steps":["trace[445184127] 'agreement among raft nodes before linearized reading'  (duration: 300.984114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408852Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.107693Z","time spent":"301.152528ms","remote":"127.0.0.1:36744","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5959,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzq9t\" limit:1 "}
	{"level":"info","ts":"2025-10-13T22:05:02.654031Z","caller":"traceutil/trace.go:172","msg":"trace[140611527] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"175.710479ms","start":"2025-10-13T22:05:02.478298Z","end":"2025-10-13T22:05:02.654008Z","steps":["trace[140611527] 'process raft request'  (duration: 137.097226ms)","trace[140611527] 'compare'  (duration: 38.358038ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:05:02.654040Z","caller":"traceutil/trace.go:172","msg":"trace[1763462367] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"175.579411ms","start":"2025-10-13T22:05:02.478448Z","end":"2025-10-13T22:05:02.654027Z","steps":["trace[1763462367] 'process raft request'  (duration: 175.44056ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:05:02.654100Z","caller":"traceutil/trace.go:172","msg":"trace[1929973869] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"174.033511ms","start":"2025-10-13T22:05:02.480059Z","end":"2025-10-13T22:05:02.654093Z","steps":["trace[1929973869] 'process raft request'  (duration: 173.901782ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:03.644234Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.717222ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789312747368259 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" mod_revision:551 > success:<request_put:<key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" value_size:630 lease:4650417275892592377 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T22:05:03.644369Z","caller":"traceutil/trace.go:172","msg":"trace[1300980893] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"253.505623ms","start":"2025-10-13T22:05:03.390833Z","end":"2025-10-13T22:05:03.644339Z","steps":["trace[1300980893] 'process raft request'  (duration: 125.604794ms)","trace[1300980893] 'compare'  (duration: 127.594746ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:05:52 up  1:48,  0 user,  load average: 6.02, 4.36, 5.87
	Linux embed-certs-521669 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [942193e0f8e228dbe430e60585172509fea39415b4683743cc8575fdd693853a] <==
	I1013 22:04:58.970223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:58.970527       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:04:58.970721       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:58.970736       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:58.970767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:59.273696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:59.273777       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:59.273791       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:59.273924       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:59.378287       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:59.469740       1 metrics.go:72] Registering metrics
	I1013 22:04:59.469870       1 controller.go:711] "Syncing nftables rules"
	I1013 22:05:09.180167       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:09.180249       1 main.go:301] handling current node
	I1013 22:05:19.180134       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:19.180188       1 main.go:301] handling current node
	I1013 22:05:29.180768       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:29.180808       1 main.go:301] handling current node
	I1013 22:05:39.182146       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:39.182181       1 main.go:301] handling current node
	I1013 22:05:49.181086       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:49.181129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dd6ca47e50d2cbd68431e1f5ab00c476734d1abae5ea035e8079056054b006bb] <==
	I1013 22:04:57.983493       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:04:57.984043       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:04:57.984133       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:04:57.984144       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:57.984150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:57.984163       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:57.984429       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:04:57.985435       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:04:57.985497       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:04:57.991106       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:04:57.995230       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:04:58.010368       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:58.010468       1 policy_source.go:240] refreshing policies
	I1013 22:04:58.038285       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:58.301695       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:58.345268       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:58.348303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:58.391535       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:58.401394       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:58.457215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.109.255"}
	I1013 22:04:58.468282       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.190.76"}
	I1013 22:04:58.887659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:05:01.407136       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:05:01.704035       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:05:01.867428       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [aee0c2d478b28876e3d4fc00fe5f4d69ca458ac596bdc766a2e18070947e0fc8] <==
	I1013 22:05:01.300378       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:05:01.300461       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:05:01.300469       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:05:01.300577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:05:01.300726       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:05:01.301214       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:05:01.301315       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:05:01.303971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:05:01.304014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:05:01.307335       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:05:01.307355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:05:01.307360       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:05:01.307367       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:05:01.307388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:05:01.307395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:05:01.307622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:05:01.307769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:05:01.310372       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:05:01.313404       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:05:01.313503       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:05:01.313588       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-521669"
	I1013 22:05:01.313635       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:05:01.317750       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:05:01.320619       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:05:01.321717       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [1f49063ffccfd9f6190201e8082032d4920f99c8dc4110db28267978196f15df] <==
	I1013 22:04:58.811429       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:58.888371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:58.988571       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:58.988614       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1013 22:04:58.988739       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:59.015185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:59.015248       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:59.022602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:59.023499       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:59.023528       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:59.025011       1 config.go:200] "Starting service config controller"
	I1013 22:04:59.025037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:59.025507       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:59.025515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:59.025532       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:59.025537       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:59.025556       1 config.go:309] "Starting node config controller"
	I1013 22:04:59.025561       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:59.126070       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:59.126104       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:04:59.126108       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:59.126122       1 shared_informer.go:356] "Caches are synced" controller="service config"
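Note: the single E-level line above is kube-proxy warning that, with nodePortAddresses unset, NodePort traffic is accepted on every local IP. The filter it suggests is plain CIDR containment; a small sketch of that check (the CIDR list and node IP below are illustrative, borrowed from this cluster's 192.168.103.0/24 network):

	package main

	import (
		"fmt"
		"net/netip"
	)

	// nodePortAddressAccepts reports whether ip falls inside any configured
	// nodePortAddresses CIDR. An empty list means "accept on all local IPs",
	// which is exactly what the warning above points out.
	func nodePortAddressAccepts(cidrs []string, ip netip.Addr) (bool, error) {
		if len(cidrs) == 0 {
			return true, nil
		}
		for _, c := range cidrs {
			p, err := netip.ParsePrefix(c)
			if err != nil {
				return false, err
			}
			if p.Contains(ip) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		nodeIP := netip.MustParseAddr("192.168.103.2") // the NodeIP from the log
		ok, _ := nodePortAddressAccepts([]string{"192.168.103.0/24"}, nodeIP)
		fmt.Println(ok) // true: only the primary subnet would accept NodePorts
	}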
	
	
	==> kube-scheduler [fdd62b2d9b12e7b64a03352f0d267662da3aa571a99ec9ecfb273dbe33b29f29] <==
	I1013 22:04:56.614100       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:04:57.977441       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:57.977473       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:57.984138       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:04:57.984853       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:04:57.984724       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:57.984786       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:57.985811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:57.984759       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:57.984774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:57.993916       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:58.085902       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:04:58.086871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:58.095107       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
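Note: "Generated self-signed cert in-memory" above means the scheduler minted a throwaway serving certificate rather than loading one from disk. A rough standard-library sketch of the same idea (the subject, lifetime, and key type are assumptions, not the scheduler's actual parameters):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"log"
		"math/big"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "localhost"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost"},
		}
		// Self-signed: the template acts as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		cert, _ := x509.ParseCertificate(der)
		fmt.Println("self-signed cert for:", cert.Subject.CommonName)
	}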
	
	
	==> kubelet <==
	Oct 13 22:05:05 embed-certs-521669 kubelet[708]: I1013 22:05:05.732242     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:05:06 embed-certs-521669 kubelet[708]: I1013 22:05:06.424880     708 scope.go:117] "RemoveContainer" containerID="f6ace8ed9a4db5040519fc603026fd19c129da7f02e847c0ec69e722c6721eb5"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: I1013 22:05:07.430960     708 scope.go:117] "RemoveContainer" containerID="f6ace8ed9a4db5040519fc603026fd19c129da7f02e847c0ec69e722c6721eb5"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: I1013 22:05:07.431137     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: E1013 22:05:07.431348     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:08 embed-certs-521669 kubelet[708]: I1013 22:05:08.436384     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:08 embed-certs-521669 kubelet[708]: E1013 22:05:08.436593     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:10 embed-certs-521669 kubelet[708]: I1013 22:05:10.455692     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v" podStartSLOduration=1.651877776 podStartE2EDuration="8.455666969s" podCreationTimestamp="2025-10-13 22:05:02 +0000 UTC" firstStartedPulling="2025-10-13 22:05:02.962578188 +0000 UTC m=+7.732734785" lastFinishedPulling="2025-10-13 22:05:09.766367392 +0000 UTC m=+14.536523978" observedRunningTime="2025-10-13 22:05:10.455244647 +0000 UTC m=+15.225401252" watchObservedRunningTime="2025-10-13 22:05:10.455666969 +0000 UTC m=+15.225823573"
	Oct 13 22:05:16 embed-certs-521669 kubelet[708]: I1013 22:05:16.515225     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: I1013 22:05:17.467652     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: I1013 22:05:17.467903     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: E1013 22:05:17.468143     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:26 embed-certs-521669 kubelet[708]: I1013 22:05:26.515279     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:26 embed-certs-521669 kubelet[708]: E1013 22:05:26.515499     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:29 embed-certs-521669 kubelet[708]: I1013 22:05:29.506794     708 scope.go:117] "RemoveContainer" containerID="51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.352689     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.547669     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.548124     708 scope.go:117] "RemoveContainer" containerID="32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: E1013 22:05:41.548336     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:46 embed-certs-521669 kubelet[708]: I1013 22:05:46.514937     708 scope.go:117] "RemoveContainer" containerID="32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	Oct 13 22:05:46 embed-certs-521669 kubelet[708]: E1013 22:05:46.515156     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:49 embed-certs-521669 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
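Note: the back-off values in the kubelet errors above (10s, then 20s, then 40s) are CrashLoopBackOff's doubling schedule. A sketch of that schedule follows; the initial 10s matches the log, while the five-minute cap is the upstream kubelet default (assumed here, since this log ends before reaching it):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const (
			initial  = 10 * time.Second // first restart delay, as seen in the log
			maxDelay = 5 * time.Minute  // upstream kubelet default cap
		)
		d := initial
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, d)
			d *= 2
			if d > maxDelay {
				d = maxDelay
			}
		}
	}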
	
	
	==> kubernetes-dashboard [ddc954a1f166be754e4eb7e65b3e26d4f213b366dfcb0dee4876ade24670515c] <==
	2025/10/13 22:05:09 Starting overwatch
	2025/10/13 22:05:09 Using namespace: kubernetes-dashboard
	2025/10/13 22:05:09 Using in-cluster config to connect to apiserver
	2025/10/13 22:05:09 Using secret token for csrf signing
	2025/10/13 22:05:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:05:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:05:09 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:05:09 Generating JWE encryption key
	2025/10/13 22:05:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:05:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:05:09 Initializing JWE encryption key from synchronized object
	2025/10/13 22:05:09 Creating in-cluster Sidecar client
	2025/10/13 22:05:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:05:09 Serving insecurely on HTTP port: 9090
	2025/10/13 22:05:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219] <==
	I1013 22:04:58.741754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:05:28.744982       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
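Note: this fatal line is the provisioner's startup probe timing out against the in-cluster service VIP (10.96.0.1:443), which stays unreachable until kube-proxy reprograms the service rules after the restart; the replacement pod at 22:05:29 succeeds. A minimal client-go sketch of the same probe (an assumed shape, not the provisioner's actual main.go):

	package main

	import (
		"fmt"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to the 10.96.0.1:443 VIP
		if err != nil {
			log.Fatal(err)
		}
		cfg.Timeout = 32 * time.Second // matches ?timeout=32s in the error URL
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("server version:", v.GitVersion)
	}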
	
	
	==> storage-provisioner [f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b] <==
	I1013 22:05:29.578440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:05:29.587545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:05:29.587591       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:05:29.590893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:33.047638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:37.308446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:40.907672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:43.961498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:46.984358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:46.989520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:46.989662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:05:46.989835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa!
	I1013 22:05:46.989816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2713828c-d71d-46c7-8af8-1b55a2cb8cd7", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa became leader
	W1013 22:05:46.993445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:47.000968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:47.090404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa!
	W1013 22:05:49.004781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:49.010689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:51.013986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:51.018624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:53.022309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:53.026614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
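The Endpoints deprecation warnings in the log above come from the storage-provisioner's leader election, which still locks on a v1 Endpoints object; Kubernetes 1.33+ flags every such request. For comparison, a hedged sketch of the Lease-based lock client-go provides instead (the identity string and timings are illustrative; the lease name matches the log):

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lease name from the log
				Namespace: "kube-system",
			},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}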
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-521669 -n embed-certs-521669: exit status 2 (355.615152ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
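For context, `minikube status` exits non-zero whenever a component is not in the expected state, so the helper records the exit code but treats it as advisory ("may be ok"). A small sketch of reading such an exit code in Go, mirroring the invocation above (this is not the helper's actual code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-521669")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 here means "a component is not Running".
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("run error:", err)
		}
		fmt.Printf("stdout: %s", out)
	}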
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-521669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-521669
helpers_test.go:243: (dbg) docker inspect embed-certs-521669:

-- stdout --
	[
	    {
	        "Id": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	        "Created": "2025-10-13T22:03:15.556123483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505502,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:48.791654166Z",
	            "FinishedAt": "2025-10-13T22:04:47.294015675Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hostname",
	        "HostsPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/hosts",
	        "LogPath": "/var/lib/docker/containers/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203/1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203-json.log",
	        "Name": "/embed-certs-521669",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-521669:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-521669",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1baa373eead7e68f171d088aace3c344d1736cf1920b8b49bced7115bf5dc203",
	                "LowerDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da-init/diff:/var/lib/docker/overlay2/d6236b573fee274727d414fe7dfb0718c2f1a4b8ebed995b4196b3231a8d31a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a20280ab14381960ae7156d30bd7b2fa35423fe9a356df896c104f200bd64da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-521669",
	                "Source": "/var/lib/docker/volumes/embed-certs-521669/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-521669",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-521669",
	                "name.minikube.sigs.k8s.io": "embed-certs-521669",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c12db32de2341b9b4526a9ee42b76d0b6bc3e0e6bd3e6518554950e96b3a3617",
	            "SandboxKey": "/var/run/docker/netns/c12db32de234",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-521669": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:9c:e8:76:0a:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50800b9f1c9d1d3bc768e42eef173bae32c640bbf4383e5f2ce56c38ad7a7349",
	                    "EndpointID": "628ab4c06aaf3f28d03a4474ae3f2dfbb8611624f650fa20553ff43dea62afe3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-521669",
	                        "1baa373eead7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
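The post-mortem only reads a handful of fields from this dump: the container state and the published ports. A hedged sketch of pulling just those out of `docker inspect` output (struct trimmed to the fields used here; not the test helper's implementation):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-521669").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		c := containers[0]
		fmt.Println("state:", c.State.Status)
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}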
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669: exit status 2 (350.233284ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-521669 logs -n 25: (1.244694383s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-505851 image list --format=json                                                                                                              │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p default-k8s-diff-port-505851 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 pgrep -a kubelet                                                                                                                                 │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ delete  │ -p default-k8s-diff-port-505851                                                                                                                                    │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ delete  │ -p default-k8s-diff-port-505851                                                                                                                                    │ default-k8s-diff-port-505851 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p custom-flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-200102        │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/hosts                                                                                                                              │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo crictl pods                                                                                                                                 │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo crictl ps --all                                                                                                                             │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ image   │ embed-certs-521669 image list --format=json                                                                                                                        │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ pause   │ -p embed-certs-521669 --alsologtostderr -v=1                                                                                                                       │ embed-certs-521669           │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo ip a s                                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo ip r s                                                                                                                                      │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo iptables-save                                                                                                                               │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ ssh     │ -p kindnet-200102 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ ssh     │ -p kindnet-200102 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-200102               │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:05:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:05:40.610020  517273 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:05:40.610190  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:40.610199  517273 out.go:374] Setting ErrFile to fd 2...
	I1013 22:05:40.610203  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:05:40.610437  517273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:05:40.610958  517273 out.go:368] Setting JSON to false
	I1013 22:05:40.612470  517273 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6489,"bootTime":1760386652,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:05:40.612587  517273 start.go:141] virtualization: kvm guest
	I1013 22:05:40.614884  517273 out.go:179] * [custom-flannel-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:05:40.616582  517273 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:05:40.616616  517273 notify.go:220] Checking for updates...
	I1013 22:05:40.619118  517273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:05:40.620818  517273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:05:40.622041  517273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:05:40.623310  517273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:05:40.624717  517273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:05:40.626497  517273 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626609  517273 config.go:182] Loaded profile config "embed-certs-521669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626700  517273 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:40.626817  517273 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:05:40.650970  517273 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:05:40.651085  517273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:40.709561  517273 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:05:40.699557614 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:05:40.709677  517273 docker.go:318] overlay module found
	I1013 22:05:40.711632  517273 out.go:179] * Using the docker driver based on user configuration
	I1013 22:05:40.713135  517273 start.go:305] selected driver: docker
	I1013 22:05:40.713153  517273 start.go:925] validating driver "docker" against <nil>
	I1013 22:05:40.713164  517273 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:05:40.713806  517273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:05:40.771765  517273 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:05:40.762290223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:05:40.772009  517273 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:05:40.772318  517273 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:40.774402  517273 out.go:179] * Using Docker driver with root privileges
	I1013 22:05:40.775742  517273 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1013 22:05:40.775801  517273 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1013 22:05:40.775882  517273 start.go:349] cluster config:
	{Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:05:40.777238  517273 out.go:179] * Starting "custom-flannel-200102" primary control-plane node in "custom-flannel-200102" cluster
	I1013 22:05:40.778453  517273 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:05:40.779700  517273 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:05:40.780911  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:40.780956  517273 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:05:40.780984  517273 cache.go:58] Caching tarball of preloaded images
	I1013 22:05:40.781043  517273 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:05:40.781127  517273 preload.go:233] Found /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:05:40.781144  517273 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:05:40.781270  517273 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json ...
	I1013 22:05:40.781295  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json: {Name:mk07a72dbdb2ec66cf7c88827d8cab605e23d904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:40.802555  517273 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:05:40.802579  517273 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:05:40.802595  517273 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:05:40.802619  517273 start.go:360] acquireMachinesLock for custom-flannel-200102: {Name:mkcd003ae0d506525f7ece13c5a148a7bc023af9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:05:40.802729  517273 start.go:364] duration metric: took 79.219µs to acquireMachinesLock for "custom-flannel-200102"
	I1013 22:05:40.802754  517273 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:40.802816  517273 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:05:38.783437  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:38.783480  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:38.783492  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:38.783502  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:38.783507  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:38.783514  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:38.783519  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:38.783524  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:38.783529  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:38.783534  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:38.783553  510068 retry.go:31] will retry after 1.784343518s: missing components: kube-dns
	I1013 22:05:40.573104  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:40.573136  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:40.573144  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:40.573151  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:40.573155  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:40.573160  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:40.573163  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:40.573167  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:40.573170  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:40.573173  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:40.573188  510068 retry.go:31] will retry after 1.675380625s: missing components: kube-dns
	I1013 22:05:42.254693  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:42.254754  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:42.254768  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:42.254792  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:42.254801  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:42.254811  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:42.254816  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:42.254825  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:42.254831  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:42.254841  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:42.254862  510068 retry.go:31] will retry after 2.7450669s: missing components: kube-dns
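The retry.go lines above are minikube's poll-and-backoff wait: the kube-system pod list is re-read after a growing delay until every required component (here kube-dns) reports Running. A minimal standalone sketch of the same pattern, assuming kubectl already points at the cluster and using the standard k8s-app=kube-dns label that CoreDNS carries:

    delay=2
    for attempt in 1 2 3 4 5 6 7 8; do
      phase=$(kubectl -n kube-system get pods -l k8s-app=kube-dns \
        -o jsonpath='{.items[0].status.phase}' 2>/dev/null || echo Unknown)
      if [ "$phase" = Running ]; then
        echo "kube-dns is running (attempt $attempt)"; exit 0
      fi
      echo "missing components: kube-dns; will retry after ${delay}s"
      sleep "$delay"
      delay=$((delay * 2))   # grow the delay, roughly like retry.go's increasing waits
    done
    echo "kube-dns never became ready" >&2; exit 1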
	I1013 22:05:40.805051  517273 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:05:40.805308  517273 start.go:159] libmachine.API.Create for "custom-flannel-200102" (driver="docker")
	I1013 22:05:40.805345  517273 client.go:168] LocalClient.Create starting
	I1013 22:05:40.805416  517273 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem
	I1013 22:05:40.805461  517273 main.go:141] libmachine: Decoding PEM data...
	I1013 22:05:40.805488  517273 main.go:141] libmachine: Parsing certificate...
	I1013 22:05:40.805579  517273 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem
	I1013 22:05:40.805612  517273 main.go:141] libmachine: Decoding PEM data...
	I1013 22:05:40.805627  517273 main.go:141] libmachine: Parsing certificate...
	I1013 22:05:40.806069  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:05:40.824133  517273 cli_runner.go:211] docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:05:40.824216  517273 network_create.go:284] running [docker network inspect custom-flannel-200102] to gather additional debugging logs...
	I1013 22:05:40.824245  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102
	W1013 22:05:40.841516  517273 cli_runner.go:211] docker network inspect custom-flannel-200102 returned with exit code 1
	I1013 22:05:40.841548  517273 network_create.go:287] error running [docker network inspect custom-flannel-200102]: docker network inspect custom-flannel-200102: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-200102 not found
	I1013 22:05:40.841579  517273 network_create.go:289] output of [docker network inspect custom-flannel-200102]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-200102 not found
	
	** /stderr **
	I1013 22:05:40.841787  517273 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:40.862232  517273 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
	I1013 22:05:40.863102  517273 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35c0cecee577 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f2:41:bc:f8:12:32} reservation:<nil>}
	I1013 22:05:40.863888  517273 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2e951fbeb08e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:fb:be:51:da:97} reservation:<nil>}
	I1013 22:05:40.864702  517273 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec07a0}
	I1013 22:05:40.864733  517273 network_create.go:124] attempt to create docker network custom-flannel-200102 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:05:40.864799  517273 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-200102 custom-flannel-200102
	I1013 22:05:40.931054  517273 network_create.go:108] docker network custom-flannel-200102 192.168.76.0/24 created
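The subnet scan above settles on the first free private /24 and then creates the network. For reference, the logged cli_runner invocation as a directly runnable command, with the subnet the scan settled on:

    docker network create \
      --driver=bridge \
      --subnet=192.168.76.0/24 \
      --gateway=192.168.76.1 \
      -o --ip-masq -o --icc \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=custom-flannel-200102 \
      custom-flannel-200102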
	I1013 22:05:40.931089  517273 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-200102" container
	I1013 22:05:40.931173  517273 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:05:40.951010  517273 cli_runner.go:164] Run: docker volume create custom-flannel-200102 --label name.minikube.sigs.k8s.io=custom-flannel-200102 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:05:40.969487  517273 oci.go:103] Successfully created a docker volume custom-flannel-200102
	I1013 22:05:40.969591  517273 cli_runner.go:164] Run: docker run --rm --name custom-flannel-200102-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-200102 --entrypoint /usr/bin/test -v custom-flannel-200102:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:05:41.491036  517273 oci.go:107] Successfully prepared a docker volume custom-flannel-200102
	I1013 22:05:41.491102  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:41.491128  517273 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:05:41.491195  517273 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
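Extraction is a plain tar-over-lz4 unpack of the cached tarball into the machine's named volume. The same invocation as a sketch, assuming the preload cache lives under $HOME/.minikube (the Jenkins workspace path in the log is environment-specific):

    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" \
      -v custom-flannel-200102:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 \
      -I lz4 -xf /preloaded.tar -C /extractDir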
	I1013 22:05:45.005141  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:45.005185  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:45.005198  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:45.005221  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:45.005230  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:45.005237  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:45.005244  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:45.005249  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:45.005258  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:45.005267  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:45.005302  510068 retry.go:31] will retry after 3.44311488s: missing components: kube-dns
	I1013 22:05:48.452986  510068 system_pods.go:86] 9 kube-system pods found
	I1013 22:05:48.453043  510068 system_pods.go:89] "calico-kube-controllers-59556d9b4c-kvkr8" [73c85800-ccdd-4d93-bbe7-3a214d9c23e7] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 22:05:48.453055  510068 system_pods.go:89] "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 22:05:48.453064  510068 system_pods.go:89] "coredns-66bc5c9577-6bk7g" [d4902451-b2ff-4e5e-9a1c-1c832aada996] Running
	I1013 22:05:48.453070  510068 system_pods.go:89] "etcd-calico-200102" [2b4c3f03-0ecf-45c9-b6d2-9b4c4d6099f2] Running
	I1013 22:05:48.453076  510068 system_pods.go:89] "kube-apiserver-calico-200102" [95748a10-7102-4dee-97c1-478d48736094] Running
	I1013 22:05:48.453082  510068 system_pods.go:89] "kube-controller-manager-calico-200102" [59284bf2-36f8-413e-921e-c2a55aaf4885] Running
	I1013 22:05:48.453089  510068 system_pods.go:89] "kube-proxy-ggd54" [d80e6296-eb8d-429d-8fe5-c44b12c06329] Running
	I1013 22:05:48.453097  510068 system_pods.go:89] "kube-scheduler-calico-200102" [30cf2518-f590-4214-ae04-db8be6dff43f] Running
	I1013 22:05:48.453101  510068 system_pods.go:89] "storage-provisioner" [775a2fbe-c5b3-4080-8645-298635b852a3] Running
	I1013 22:05:48.453110  510068 system_pods.go:126] duration metric: took 14.990653671s to wait for k8s-apps to be running ...
	I1013 22:05:48.453121  510068 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:05:48.453169  510068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:48.468292  510068 system_svc.go:56] duration metric: took 15.159035ms WaitForService to wait for kubelet
	I1013 22:05:48.468323  510068 kubeadm.go:586] duration metric: took 19.871791057s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:48.468354  510068 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:05:48.472146  510068 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1013 22:05:48.472178  510068 node_conditions.go:123] node cpu capacity is 8
	I1013 22:05:48.472194  510068 node_conditions.go:105] duration metric: took 3.834382ms to run NodePressure ...
	I1013 22:05:48.472210  510068 start.go:241] waiting for startup goroutines ...
	I1013 22:05:48.472218  510068 start.go:246] waiting for cluster config update ...
	I1013 22:05:48.472231  510068 start.go:255] writing updated cluster config ...
	I1013 22:05:48.472550  510068 ssh_runner.go:195] Run: rm -f paused
	I1013 22:05:48.477490  510068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:48.482020  510068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6bk7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.487231  510068 pod_ready.go:94] pod "coredns-66bc5c9577-6bk7g" is "Ready"
	I1013 22:05:48.487259  510068 pod_ready.go:86] duration metric: took 5.210749ms for pod "coredns-66bc5c9577-6bk7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.489611  510068 pod_ready.go:83] waiting for pod "etcd-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.494309  510068 pod_ready.go:94] pod "etcd-calico-200102" is "Ready"
	I1013 22:05:48.494335  510068 pod_ready.go:86] duration metric: took 4.687947ms for pod "etcd-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.496591  510068 pod_ready.go:83] waiting for pod "kube-apiserver-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.501003  510068 pod_ready.go:94] pod "kube-apiserver-calico-200102" is "Ready"
	I1013 22:05:48.501031  510068 pod_ready.go:86] duration metric: took 4.413264ms for pod "kube-apiserver-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.503134  510068 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:48.884101  510068 pod_ready.go:94] pod "kube-controller-manager-calico-200102" is "Ready"
	I1013 22:05:48.884136  510068 pod_ready.go:86] duration metric: took 380.982445ms for pod "kube-controller-manager-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.083279  510068 pod_ready.go:83] waiting for pod "kube-proxy-ggd54" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.483122  510068 pod_ready.go:94] pod "kube-proxy-ggd54" is "Ready"
	I1013 22:05:49.483153  510068 pod_ready.go:86] duration metric: took 399.845578ms for pod "kube-proxy-ggd54" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:49.684490  510068 pod_ready.go:83] waiting for pod "kube-scheduler-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:50.083026  510068 pod_ready.go:94] pod "kube-scheduler-calico-200102" is "Ready"
	I1013 22:05:50.083062  510068 pod_ready.go:86] duration metric: took 398.540722ms for pod "kube-scheduler-calico-200102" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:50.083085  510068 pod_ready.go:40] duration metric: took 1.605554156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:50.143776  510068 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 22:05:50.145669  510068 out.go:179] * Done! kubectl is now configured to use "calico-200102" cluster and "default" namespace by default
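The per-component extra wait above can be approximated with kubectl wait against the same label selectors; a sketch, using the calico-200102 context that the final line configures:

    for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
                    component=kube-controller-manager k8s-app=kube-proxy \
                    component=kube-scheduler; do
      kubectl --context calico-200102 -n kube-system \
        wait --for=condition=Ready pod -l "$selector" --timeout=4m
    done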
	I1013 22:05:46.070036  517273 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-200102:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.57877893s)
	I1013 22:05:46.070066  517273 kic.go:203] duration metric: took 4.578935394s to extract preloaded images to volume ...
	W1013 22:05:46.070166  517273 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1013 22:05:46.070196  517273 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1013 22:05:46.070236  517273 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:05:46.126795  517273 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-200102 --name custom-flannel-200102 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-200102 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-200102 --network custom-flannel-200102 --ip 192.168.76.2 --volume custom-flannel-200102:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:05:46.409060  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Running}}
	I1013 22:05:46.429128  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.448596  517273 cli_runner.go:164] Run: docker exec custom-flannel-200102 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:05:46.498290  517273 oci.go:144] the created container "custom-flannel-200102" has a running status.
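The inspects plus the iptables stat above are kicbase sanity probes: confirm the node container is really running and that the Debian alternatives entry for iptables exists inside it. By hand:

    docker container inspect custom-flannel-200102 --format '{{.State.Status}}'
    docker exec custom-flannel-200102 stat /var/lib/dpkg/alternatives/iptables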
	I1013 22:05:46.498321  517273 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa...
	I1013 22:05:46.728370  517273 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:05:46.761531  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.785351  517273 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:05:46.785382  517273 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-200102 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:05:46.833458  517273 cli_runner.go:164] Run: docker container inspect custom-flannel-200102 --format={{.State.Status}}
	I1013 22:05:46.854376  517273 machine.go:93] provisionDockerMachine start ...
	I1013 22:05:46.854515  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:46.875571  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:46.875929  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:46.875951  517273 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:05:47.041655  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-200102
	
	I1013 22:05:47.041694  517273 ubuntu.go:182] provisioning hostname "custom-flannel-200102"
	I1013 22:05:47.041767  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.062135  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.063452  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.063538  517273 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-200102 && echo "custom-flannel-200102" | sudo tee /etc/hostname
	I1013 22:05:47.223239  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-200102
	
	I1013 22:05:47.223321  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.243551  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.243936  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.243973  517273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-200102' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-200102/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-200102' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:05:47.388934  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:05:47.388964  517273 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-226873/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-226873/.minikube}
	I1013 22:05:47.389050  517273 ubuntu.go:190] setting up certificates
	I1013 22:05:47.389064  517273 provision.go:84] configureAuth start
	I1013 22:05:47.389111  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:47.410174  517273 provision.go:143] copyHostCerts
	I1013 22:05:47.410245  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem, removing ...
	I1013 22:05:47.410258  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem
	I1013 22:05:47.410349  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/cert.pem (1123 bytes)
	I1013 22:05:47.410484  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem, removing ...
	I1013 22:05:47.410492  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem
	I1013 22:05:47.410533  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/key.pem (1679 bytes)
	I1013 22:05:47.410628  517273 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem, removing ...
	I1013 22:05:47.410635  517273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem
	I1013 22:05:47.410671  517273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-226873/.minikube/ca.pem (1078 bytes)
	I1013 22:05:47.410773  517273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-200102 san=[127.0.0.1 192.168.76.2 custom-flannel-200102 localhost minikube]
	I1013 22:05:47.677831  517273 provision.go:177] copyRemoteCerts
	I1013 22:05:47.677894  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:05:47.677941  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.696394  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:47.801395  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:05:47.823351  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1013 22:05:47.844626  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:05:47.866088  517273 provision.go:87] duration metric: took 477.009651ms to configureAuth
	I1013 22:05:47.866118  517273 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:05:47.866320  517273 config.go:182] Loaded profile config "custom-flannel-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:05:47.866465  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:47.889194  517273 main.go:141] libmachine: Using SSH client type: native
	I1013 22:05:47.889481  517273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1013 22:05:47.889505  517273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:05:48.158847  517273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:05:48.158883  517273 machine.go:96] duration metric: took 1.304471658s to provisionDockerMachine
	I1013 22:05:48.158896  517273 client.go:171] duration metric: took 7.353543831s to LocalClient.Create
	I1013 22:05:48.158921  517273 start.go:167] duration metric: took 7.353612609s to libmachine.API.Create "custom-flannel-200102"
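To confirm the container-runtime option written during provisioning actually landed, the dropped file and the restarted service can be checked from the host; a sketch against the node container:

    docker exec custom-flannel-200102 cat /etc/sysconfig/crio.minikube   # shows CRIO_MINIKUBE_OPTIONS
    docker exec custom-flannel-200102 systemctl is-active crio           # should print: active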
	I1013 22:05:48.158935  517273 start.go:293] postStartSetup for "custom-flannel-200102" (driver="docker")
	I1013 22:05:48.158953  517273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:05:48.159073  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:05:48.159132  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.178109  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.283534  517273 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:05:48.288206  517273 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:05:48.288240  517273 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:05:48.288257  517273 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/addons for local assets ...
	I1013 22:05:48.288321  517273 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-226873/.minikube/files for local assets ...
	I1013 22:05:48.288418  517273 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem -> 2309292.pem in /etc/ssl/certs
	I1013 22:05:48.288542  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:05:48.297633  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:48.321645  517273 start.go:296] duration metric: took 162.690099ms for postStartSetup
	I1013 22:05:48.322119  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:48.341595  517273 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/config.json ...
	I1013 22:05:48.341920  517273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:05:48.341983  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.361854  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.462899  517273 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:05:48.468644  517273 start.go:128] duration metric: took 7.665808291s to createHost
	I1013 22:05:48.468673  517273 start.go:83] releasing machines lock for "custom-flannel-200102", held for 7.665932087s
	I1013 22:05:48.468749  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-200102
	I1013 22:05:48.489810  517273 ssh_runner.go:195] Run: cat /version.json
	I1013 22:05:48.489864  517273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:05:48.489964  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.489866  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-200102
	I1013 22:05:48.512416  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.512749  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/custom-flannel-200102/id_rsa Username:docker}
	I1013 22:05:48.687792  517273 ssh_runner.go:195] Run: systemctl --version
	I1013 22:05:48.696525  517273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:05:48.752102  517273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:05:48.759945  517273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:05:48.760063  517273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:05:48.797340  517273 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:05:48.797364  517273 start.go:495] detecting cgroup driver to use...
	I1013 22:05:48.797409  517273 detect.go:190] detected "systemd" cgroup driver on host os
	I1013 22:05:48.797465  517273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:05:48.821148  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:05:48.839314  517273 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:05:48.839375  517273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:05:48.863630  517273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:05:48.888738  517273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:05:49.011692  517273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:05:49.140495  517273 docker.go:234] disabling docker service ...
	I1013 22:05:49.140555  517273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:05:49.166570  517273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:05:49.184764  517273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:05:49.311921  517273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:05:49.435394  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
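Taken together, the unit juggling above reduces to stop/disable/mask for every Docker-side unit so that CRI-O alone owns the CRI socket. A consolidated sketch of the same sequence, run inside the node:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true   # stop whichever of them is running
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"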
	I1013 22:05:49.454513  517273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:05:49.474066  517273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:05:49.474240  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.490385  517273 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 22:05:49.490446  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.504495  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.517646  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.529740  517273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:05:49.544171  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.559507  517273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.580203  517273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:05:49.594811  517273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:05:49.605968  517273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:05:49.617660  517273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:49.743416  517273 ssh_runner.go:195] Run: sudo systemctl restart crio
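The sed edits above are the whole of the CRI-O tuning for this profile: pin the pause image, switch to the systemd cgroup manager, and move conmon into the pod cgroup. Grouped into one script against the same drop-in file:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio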
	I1013 22:05:50.199919  517273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:05:50.200132  517273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:05:50.204855  517273 start.go:563] Will wait 60s for crictl version
	I1013 22:05:50.204916  517273 ssh_runner.go:195] Run: which crictl
	I1013 22:05:50.209131  517273 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:05:50.239042  517273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:05:50.239141  517273 ssh_runner.go:195] Run: crio --version
	I1013 22:05:50.282258  517273 ssh_runner.go:195] Run: crio --version
	I1013 22:05:50.318282  517273 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:05:50.320116  517273 cli_runner.go:164] Run: docker network inspect custom-flannel-200102 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:05:50.340369  517273 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:05:50.346583  517273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
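The /etc/hosts rewrite above uses a filter-append-copy pattern: strip any stale host.minikube.internal mapping, append the fresh one, and copy the temp file back over /etc/hosts. Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts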
	I1013 22:05:50.360693  517273 kubeadm.go:883] updating cluster {Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:05:50.360842  517273 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:05:50.360914  517273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:50.397089  517273 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:50.397112  517273 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:05:50.397157  517273 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:05:50.424458  517273 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:05:50.424481  517273 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:05:50.424489  517273 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:05:50.424572  517273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-200102 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
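Once the drop-in below is transferred (the 10-kubeadm.conf scp a few lines down), the effective kubelet flags can be read back from inside the node container:

    docker exec custom-flannel-200102 systemctl cat kubelet
    docker exec custom-flannel-200102 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf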
	I1013 22:05:50.424635  517273 ssh_runner.go:195] Run: crio config
	I1013 22:05:50.478925  517273 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1013 22:05:50.478975  517273 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:05:50.479025  517273 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-200102 NodeName:custom-flannel-200102 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:05:50.479184  517273 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-200102"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:05:50.479256  517273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:05:50.494053  517273 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:05:50.494138  517273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:05:50.503711  517273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1013 22:05:50.518270  517273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:05:50.535755  517273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
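The transferred kubeadm.yaml.new can be sanity-checked before init with kubeadm's own validator; hedged, since the subcommand exists only in recent kubeadm releases and the test itself goes straight to cluster bring-up:

    docker exec custom-flannel-200102 \
      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new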
	I1013 22:05:50.550495  517273 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:05:50.554826  517273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:05:50.566100  517273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:50.673632  517273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:50.702415  517273 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102 for IP: 192.168.76.2
	I1013 22:05:50.702442  517273 certs.go:195] generating shared ca certs ...
	I1013 22:05:50.702465  517273 certs.go:227] acquiring lock for ca certs: {Name:mk5abdb742abbab05bc35d961f97579f54806d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:50.702639  517273 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key
	I1013 22:05:50.702715  517273 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key
	I1013 22:05:50.702732  517273 certs.go:257] generating profile certs ...
	I1013 22:05:50.702804  517273 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.key
	I1013 22:05:50.702830  517273 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.crt with IP's: []
	I1013 22:05:50.834862  517273 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.crt ...
	I1013 22:05:50.834904  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.crt: {Name:mk9a061684a79b7a2ea88c8ccb31116d4a8b0f43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:50.835184  517273 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.key ...
	I1013 22:05:50.835214  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/client.key: {Name:mk7781645de8406e3eac1d5b004b13791fd1bd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:50.835343  517273 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key.e426982d
	I1013 22:05:50.835366  517273 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt.e426982d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:05:51.205797  517273 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt.e426982d ...
	I1013 22:05:51.205829  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt.e426982d: {Name:mk11ea6e7a3b5fa1c1b9d4da36905e863816f2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:51.206023  517273 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key.e426982d ...
	I1013 22:05:51.206042  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key.e426982d: {Name:mk79abc6b2bea8a2f9f5c335af30ac6a8b2ec5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:51.206168  517273 certs.go:382] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt.e426982d -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt
	I1013 22:05:51.206268  517273 certs.go:386] copying /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key.e426982d -> /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key
	I1013 22:05:51.206353  517273 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.key
	I1013 22:05:51.206376  517273 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.crt with IP's: []
	I1013 22:05:51.854503  517273 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.crt ...
	I1013 22:05:51.854531  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.crt: {Name:mk9df6d9f070c1e53a4c08e8301532f5d1aa3f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:51.854730  517273 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.key ...
	I1013 22:05:51.854749  517273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.key: {Name:mkc967b2b036259df2ea31a898b3e0920cb72c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:51.854933  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem (1338 bytes)
	W1013 22:05:51.854972  517273 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929_empty.pem, impossibly tiny 0 bytes
	I1013 22:05:51.854982  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:05:51.855064  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:05:51.855093  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:05:51.855116  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/certs/key.pem (1679 bytes)
	I1013 22:05:51.855157  517273 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem (1708 bytes)
	I1013 22:05:51.855786  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:05:51.881342  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:05:51.915056  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:05:51.941894  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:05:51.965232  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 22:05:51.990215  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:05:52.012347  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:05:52.035529  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/custom-flannel-200102/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:05:52.056563  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:05:52.079082  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/certs/230929.pem --> /usr/share/ca-certificates/230929.pem (1338 bytes)
	I1013 22:05:52.103046  517273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/ssl/certs/2309292.pem --> /usr/share/ca-certificates/2309292.pem (1708 bytes)
	I1013 22:05:52.131166  517273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:05:52.148475  517273 ssh_runner.go:195] Run: openssl version
	I1013 22:05:52.156835  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2309292.pem && ln -fs /usr/share/ca-certificates/2309292.pem /etc/ssl/certs/2309292.pem"
	I1013 22:05:52.167349  517273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2309292.pem
	I1013 22:05:52.172821  517273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:24 /usr/share/ca-certificates/2309292.pem
	I1013 22:05:52.172880  517273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2309292.pem
	I1013 22:05:52.222973  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2309292.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:05:52.232766  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:05:52.242222  517273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:52.246640  517273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:52.246716  517273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:05:52.283837  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:05:52.294099  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/230929.pem && ln -fs /usr/share/ca-certificates/230929.pem /etc/ssl/certs/230929.pem"
	I1013 22:05:52.303404  517273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/230929.pem
	I1013 22:05:52.307611  517273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:24 /usr/share/ca-certificates/230929.pem
	I1013 22:05:52.307670  517273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/230929.pem
	I1013 22:05:52.349869  517273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/230929.pem /etc/ssl/certs/51391683.0"
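
The symlink sequence above follows OpenSSL's hash-based CA lookup: each certificate is published under /usr/share/ca-certificates, linked by name into /etc/ssl/certs, and then linked again as <subject-hash>.0 so TLS clients can resolve it by hash. A minimal sketch of the same convention, assuming openssl is on the PATH and using minikubeCA.pem from the log as the example certificate:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941 as seen above
	# Link by name first, then by <hash>.0 so OpenSSL's lookup-by-hash finds it.
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
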
	I1013 22:05:52.361443  517273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:05:52.365792  517273 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
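
Since only the per-cluster leaf certs are missing here, kubeadm generates them during init (the certs phase appears further down). A hypothetical manual equivalent for just this certificate, reusing the cert dir from the log:

	sudo kubeadm init phase certs apiserver-kubelet-client --cert-dir /var/lib/minikube/certs
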
	I1013 22:05:52.365854  517273 kubeadm.go:400] StartCluster: {Name:custom-flannel-200102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-200102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:05:52.365943  517273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:05:52.366064  517273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:05:52.400890  517273 cri.go:89] found id: ""
	I1013 22:05:52.400973  517273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:05:52.411134  517273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:05:52.421249  517273 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:05:52.421310  517273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:05:52.430676  517273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:05:52.430870  517273 kubeadm.go:157] found existing configuration files:
	
	I1013 22:05:52.430942  517273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:05:52.442521  517273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:05:52.442607  517273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:05:52.452242  517273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:05:52.462874  517273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:05:52.462938  517273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:05:52.472501  517273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:05:52.483578  517273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:05:52.483634  517273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:05:52.492933  517273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:05:52.502801  517273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:05:52.502871  517273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
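
The four grep-and-remove passes above reduce to one loop: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm runs. A condensed sketch of that check, using the same endpoint grepped for in the log:

	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # Keep the file only if it already points at the expected endpoint.
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
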
	I1013 22:05:52.512836  517273 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:05:52.557904  517273 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:05:52.557976  517273 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:05:52.582411  517273 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:05:52.582507  517273 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1013 22:05:52.582560  517273 kubeadm.go:318] OS: Linux
	I1013 22:05:52.582678  517273 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:05:52.582783  517273 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:05:52.582866  517273 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:05:52.582946  517273 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:05:52.583049  517273 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:05:52.583135  517273 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:05:52.583199  517273 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:05:52.583253  517273 kubeadm.go:318] CGROUPS_IO: enabled
	I1013 22:05:52.654656  517273 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:05:52.654790  517273 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:05:52.654898  517273 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:05:52.664812  517273 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 13 22:05:16 embed-certs-521669 crio[555]: time="2025-10-13T22:05:16.569780388Z" level=info msg="Started container" PID=1720 containerID=5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper id=9885fa41-89e7-4d24-bf16-ab5eb734a489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=400cb80372776e6c2e43382331e16ac1fd10c9c4b54d438bd7c69a5ae81ded52
	Oct 13 22:05:17 embed-certs-521669 crio[555]: time="2025-10-13T22:05:17.469124565Z" level=info msg="Removing container: 950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df" id=e2202320-1cb5-4894-9f78-26235905ad5b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:17 embed-certs-521669 crio[555]: time="2025-10-13T22:05:17.480187393Z" level=info msg="Removed container 950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=e2202320-1cb5-4894-9f78-26235905ad5b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.507506928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8459d21f-3232-45bb-a424-f32874b55697 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.510304983Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0c41b1f6-5e0a-45bf-9251-8df9e2e1b1d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.5115904Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d77f5060-d897-4ad7-b194-429b0d14fd44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.511936083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.517711751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.517985597Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0be378103d9ac840144298d89187309b5a5dd2b00ad7191be6b507a71ab32500/merged/etc/passwd: no such file or directory"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.518048598Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0be378103d9ac840144298d89187309b5a5dd2b00ad7191be6b507a71ab32500/merged/etc/group: no such file or directory"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.518398996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.55936445Z" level=info msg="Created container f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b: kube-system/storage-provisioner/storage-provisioner" id=d77f5060-d897-4ad7-b194-429b0d14fd44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.560208038Z" level=info msg="Starting container: f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b" id=b1591a0d-8be3-498c-91d5-75d458d84e17 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:29 embed-certs-521669 crio[555]: time="2025-10-13T22:05:29.562802623Z" level=info msg="Started container" PID=1734 containerID=f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b description=kube-system/storage-provisioner/storage-provisioner id=b1591a0d-8be3-498c-91d5-75d458d84e17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=530784bfeb10b575dc95daa5849904a87e0d13bfd19b0dc5966d8432dc59fb09
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.353751031Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=878c6167-cbec-4b68-821f-27047d53df70 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.357464857Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5df9cf01-b414-41f0-9306-895a33b05a0f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.359480976Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=1e40a084-3d99-4bc9-977c-b77af1c1b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.359808558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.370579361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.371349767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.417154022Z" level=info msg="Created container 32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=1e40a084-3d99-4bc9-977c-b77af1c1b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.418364269Z" level=info msg="Starting container: 32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672" id=cfac5671-4829-4a23-918d-217715679026 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.423151675Z" level=info msg="Started container" PID=1770 containerID=32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper id=cfac5671-4829-4a23-918d-217715679026 name=/runtime.v1.RuntimeService/StartContainer sandboxID=400cb80372776e6c2e43382331e16ac1fd10c9c4b54d438bd7c69a5ae81ded52
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.550283548Z" level=info msg="Removing container: 5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422" id=d687e991-6c2f-4fad-9f90-2945abe5438d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:05:41 embed-certs-521669 crio[555]: time="2025-10-13T22:05:41.568434165Z" level=info msg="Removed container 5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4/dashboard-metrics-scraper" id=d687e991-6c2f-4fad-9f90-2945abe5438d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32890e2303469       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   400cb80372776       dashboard-metrics-scraper-6ffb444bf9-lshp4   kubernetes-dashboard
	f8588d53d142a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   530784bfeb10b       storage-provisioner                          kube-system
	ddc954a1f166b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   0b1791a03dfe8       kubernetes-dashboard-855c9754f9-69m9v        kubernetes-dashboard
	7007fa2f7855e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   f4804271464b4       busybox                                      default
	f1a2082cf98ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   3db830775684e       coredns-66bc5c9577-kzq9t                     kube-system
	1f49063ffccfd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   5916c60b3452f       kube-proxy-jjzrs                             kube-system
	942193e0f8e22       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   c231839291fe4       kindnet-rqr6b                                kube-system
	51119e820cd1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   530784bfeb10b       storage-provisioner                          kube-system
	fdd62b2d9b12e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   4e435bf260645       kube-scheduler-embed-certs-521669            kube-system
	dd6ca47e50d2c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   afa10fac5e453       kube-apiserver-embed-certs-521669            kube-system
	aee0c2d478b28       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   72a739170411e       kube-controller-manager-embed-certs-521669   kube-system
	9380e5f9e72fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   0af436d2ebec3       etcd-embed-certs-521669                      kube-system
	
	
	==> coredns [f1a2082cf98ada2575c55be51a887e685d88ce434c06f68f0414e5a4d53bbaba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39693 - 59632 "HINFO IN 6179617301028422114.1954051897100138550. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062655421s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
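
The dial timeouts to 10.96.0.1:443 mean CoreDNS could not reach the kubernetes Service VIP while starting; the errors stop once kube-proxy programs the Service rules (its sync is visible in the kube-proxy section below). A few hypothetical spot checks for this symptom, not part of the test run, assuming kubectl access to the cluster and the curlimages/curl image:

	kubectl get svc kubernetes -n default            # the Service VIP, 10.96.0.1 here
	kubectl get endpointslices -n default            # apiserver endpoints backing the VIP
	kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
	    curl -sk https://10.96.0.1:443/healthz       # a timeout means the VIP rules are not yet programmed
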
	
	
	==> describe nodes <==
	Name:               embed-certs-521669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-521669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-521669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_03_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:03:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-521669
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:05:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:03:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:05:28 +0000   Mon, 13 Oct 2025 22:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-521669
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863444Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                3d04c77e-97c4-4463-b7c6-6837fef5c3d8
	  Boot ID:                    981b04c4-08c1-4321-af44-0fa889f5f1d8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-kzq9t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 etcd-embed-certs-521669                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-rqr6b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-521669             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-521669    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-jjzrs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-521669             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lshp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-69m9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x8 over 2m26s)  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m17s                  node-controller  Node embed-certs-521669 event: Registered Node embed-certs-521669 in Controller
	  Normal  NodeReady                95s                    kubelet          Node embed-certs-521669 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node embed-certs-521669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node embed-certs-521669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node embed-certs-521669 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                    node-controller  Node embed-certs-521669 event: Registered Node embed-certs-521669 in Controller
	
	
	==> dmesg <==
	[  +0.099627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027042] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.612816] kauditd_printk_skb: 47 callbacks suppressed
	[Oct13 21:21] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.026013] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023931] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023844] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +1.023883] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +2.047734] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +4.031606] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[  +8.511203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[ +16.382178] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	[Oct13 21:22] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 26 25 7a 9b bc f8 3a de 32 cc a7 8d 08 00
	
	
	==> etcd [9380e5f9e72fadb5e073fb6200b1804c022f9df9694c1163e541594da8527714] <==
	{"level":"warn","ts":"2025-10-13T22:05:02.406366Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"309.714387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 ","response":"range_response_count:1 size:2973"}
	{"level":"info","ts":"2025-10-13T22:05:02.406447Z","caller":"traceutil/trace.go:172","msg":"trace[1678593396] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9; range_end:; response_count:1; response_revision:536; }","duration":"309.807995ms","start":"2025-10-13T22:05:02.096623Z","end":"2025-10-13T22:05:02.406431Z","steps":["trace[1678593396] 'agreement among raft nodes before linearized reading'  (duration: 43.777322ms)","trace[1678593396] 'range keys from in-memory index tree'  (duration: 265.896602ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:05:02.406486Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.096608Z","time spent":"309.863844ms","remote":"127.0.0.1:37350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":2996,"request content":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T22:05:02.407147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.06116ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789312747368227 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" mod_revision:530 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" value_size:2754 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T22:05:02.407434Z","caller":"traceutil/trace.go:172","msg":"trace[1627572079] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"311.70827ms","start":"2025-10-13T22:05:02.095715Z","end":"2025-10-13T22:05:02.407423Z","steps":["trace[1627572079] 'process raft request'  (duration: 311.683576ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.407497Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.095698Z","time spent":"311.768276ms","remote":"127.0.0.1:36522","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":783,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9.186e2c330af945f6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9.186e2c330af945f6\" value_size:679 lease:4650417275892592377 >> failure:<>"}
	{"level":"info","ts":"2025-10-13T22:05:02.407659Z","caller":"traceutil/trace.go:172","msg":"trace[364711714] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"317.786405ms","start":"2025-10-13T22:05:02.089860Z","end":"2025-10-13T22:05:02.407646Z","steps":["trace[364711714] 'process raft request'  (duration: 50.604021ms)","trace[364711714] 'compare'  (duration: 265.968288ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T22:05:02.407729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.089797Z","time spent":"317.887112ms","remote":"127.0.0.1:36744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2835,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" mod_revision:530 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" value_size:2754 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.407844Z","caller":"traceutil/trace.go:172","msg":"trace[1935186855] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"267.457417ms","start":"2025-10-13T22:05:02.140377Z","end":"2025-10-13T22:05:02.407835Z","steps":["trace[1935186855] 'read index received'  (duration: 143.857856ms)","trace[1935186855] 'applied index is now lower than readState.Index'  (duration: 123.59875ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:05:02.407978Z","caller":"traceutil/trace.go:172","msg":"trace[1576442207] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"316.786478ms","start":"2025-10-13T22:05:02.091169Z","end":"2025-10-13T22:05:02.407955Z","steps":["trace[1576442207] 'process raft request'  (duration: 316.068508ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408049Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.091151Z","time spent":"316.863631ms","remote":"127.0.0.1:37350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3078,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" mod_revision:523 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" value_size:2996 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.408238Z","caller":"traceutil/trace.go:172","msg":"trace[88820641] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"316.581514ms","start":"2025-10-13T22:05:02.091647Z","end":"2025-10-13T22:05:02.408228Z","steps":["trace[88820641] 'process raft request'  (duration: 315.677827ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408288Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.091631Z","time spent":"316.630449ms","remote":"127.0.0.1:37296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4879,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:527 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4808 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"info","ts":"2025-10-13T22:05:02.408405Z","caller":"traceutil/trace.go:172","msg":"trace[1654464183] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"312.949921ms","start":"2025-10-13T22:05:02.095446Z","end":"2025-10-13T22:05:02.408396Z","steps":["trace[1654464183] 'process raft request'  (duration: 311.922127ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408455Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.095428Z","time spent":"312.997591ms","remote":"127.0.0.1:37434","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":835,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4.186e2c330b50b7c5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4.186e2c330b50b7c5\" value_size:720 lease:4650417275892592176 >> failure:<>"}
	{"level":"warn","ts":"2025-10-13T22:05:02.408641Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.337108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4\" limit:1 ","response":"range_response_count:1 size:2782"}
	{"level":"info","ts":"2025-10-13T22:05:02.408694Z","caller":"traceutil/trace.go:172","msg":"trace[249669695] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4; range_end:; response_count:1; response_revision:541; }","duration":"183.388688ms","start":"2025-10-13T22:05:02.225288Z","end":"2025-10-13T22:05:02.408676Z","steps":["trace[249669695] 'agreement among raft nodes before linearized reading'  (duration: 183.257567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408795Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"301.069554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzq9t\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-13T22:05:02.408830Z","caller":"traceutil/trace.go:172","msg":"trace[445184127] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kzq9t; range_end:; response_count:1; response_revision:541; }","duration":"301.110323ms","start":"2025-10-13T22:05:02.107711Z","end":"2025-10-13T22:05:02.408821Z","steps":["trace[445184127] 'agreement among raft nodes before linearized reading'  (duration: 300.984114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:02.408852Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T22:05:02.107693Z","time spent":"301.152528ms","remote":"127.0.0.1:36744","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5959,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzq9t\" limit:1 "}
	{"level":"info","ts":"2025-10-13T22:05:02.654031Z","caller":"traceutil/trace.go:172","msg":"trace[140611527] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"175.710479ms","start":"2025-10-13T22:05:02.478298Z","end":"2025-10-13T22:05:02.654008Z","steps":["trace[140611527] 'process raft request'  (duration: 137.097226ms)","trace[140611527] 'compare'  (duration: 38.358038ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:05:02.654040Z","caller":"traceutil/trace.go:172","msg":"trace[1763462367] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"175.579411ms","start":"2025-10-13T22:05:02.478448Z","end":"2025-10-13T22:05:02.654027Z","steps":["trace[1763462367] 'process raft request'  (duration: 175.44056ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:05:02.654100Z","caller":"traceutil/trace.go:172","msg":"trace[1929973869] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"174.033511ms","start":"2025-10-13T22:05:02.480059Z","end":"2025-10-13T22:05:02.654093Z","steps":["trace[1929973869] 'process raft request'  (duration: 173.901782ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:05:03.644234Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.717222ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789312747368259 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" mod_revision:551 > success:<request_put:<key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" value_size:630 lease:4650417275892592377 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-521669.186e2c317923066d\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T22:05:03.644369Z","caller":"traceutil/trace.go:172","msg":"trace[1300980893] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"253.505623ms","start":"2025-10-13T22:05:03.390833Z","end":"2025-10-13T22:05:03.644339Z","steps":["trace[1300980893] 'process raft request'  (duration: 125.604794ms)","trace[1300980893] 'compare'  (duration: 127.594746ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:05:55 up  1:48,  0 user,  load average: 6.02, 4.38, 5.88
	Linux embed-certs-521669 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [942193e0f8e228dbe430e60585172509fea39415b4683743cc8575fdd693853a] <==
	I1013 22:04:58.970223       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:04:58.970527       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1013 22:04:58.970721       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:04:58.970736       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:04:58.970767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:04:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:04:59.273696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:04:59.273777       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:04:59.273791       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:04:59.273924       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:04:59.378287       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:04:59.469740       1 metrics.go:72] Registering metrics
	I1013 22:04:59.469870       1 controller.go:711] "Syncing nftables rules"
	I1013 22:05:09.180167       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:09.180249       1 main.go:301] handling current node
	I1013 22:05:19.180134       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:19.180188       1 main.go:301] handling current node
	I1013 22:05:29.180768       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:29.180808       1 main.go:301] handling current node
	I1013 22:05:39.182146       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:39.182181       1 main.go:301] handling current node
	I1013 22:05:49.181086       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1013 22:05:49.181129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dd6ca47e50d2cbd68431e1f5ab00c476734d1abae5ea035e8079056054b006bb] <==
	I1013 22:04:57.983493       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:04:57.984043       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:04:57.984133       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:04:57.984144       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:04:57.984150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:57.984163       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:57.984429       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:04:57.985435       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:04:57.985497       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:04:57.991106       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:04:57.995230       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:04:58.010368       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:04:58.010468       1 policy_source.go:240] refreshing policies
	I1013 22:04:58.038285       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:58.301695       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:04:58.345268       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:04:58.348303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:04:58.391535       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:58.401394       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:58.457215       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.109.255"}
	I1013 22:04:58.468282       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.190.76"}
	I1013 22:04:58.887659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:05:01.407136       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:05:01.704035       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:05:01.867428       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [aee0c2d478b28876e3d4fc00fe5f4d69ca458ac596bdc766a2e18070947e0fc8] <==
	I1013 22:05:01.300378       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:05:01.300461       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:05:01.300469       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:05:01.300577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:05:01.300726       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:05:01.301214       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:05:01.301315       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:05:01.303971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:05:01.304014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:05:01.307335       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:05:01.307355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:05:01.307360       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:05:01.307367       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:05:01.307388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:05:01.307395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:05:01.307622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:05:01.307769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:05:01.310372       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:05:01.313404       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:05:01.313503       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:05:01.313588       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-521669"
	I1013 22:05:01.313635       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:05:01.317750       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:05:01.320619       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:05:01.321717       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [1f49063ffccfd9f6190201e8082032d4920f99c8dc4110db28267978196f15df] <==
	I1013 22:04:58.811429       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:04:58.888371       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:04:58.988571       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:04:58.988614       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1013 22:04:58.988739       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:04:59.015185       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:04:59.015248       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:04:59.022602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:04:59.023499       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:04:59.023528       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:59.025011       1 config.go:200] "Starting service config controller"
	I1013 22:04:59.025037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:04:59.025507       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:04:59.025515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:04:59.025532       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:04:59.025537       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:04:59.025556       1 config.go:309] "Starting node config controller"
	I1013 22:04:59.025561       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:04:59.126070       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:04:59.126104       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:04:59.126108       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:04:59.126122       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fdd62b2d9b12e7b64a03352f0d267662da3aa571a99ec9ecfb273dbe33b29f29] <==
	I1013 22:04:56.614100       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:04:57.977441       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:04:57.977473       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:04:57.984138       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:04:57.984853       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:04:57.984724       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:04:57.984786       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:57.985811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:57.984759       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:04:57.984774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:57.993916       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:04:58.085902       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:04:58.086871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:04:58.095107       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:05:05 embed-certs-521669 kubelet[708]: I1013 22:05:05.732242     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:05:06 embed-certs-521669 kubelet[708]: I1013 22:05:06.424880     708 scope.go:117] "RemoveContainer" containerID="f6ace8ed9a4db5040519fc603026fd19c129da7f02e847c0ec69e722c6721eb5"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: I1013 22:05:07.430960     708 scope.go:117] "RemoveContainer" containerID="f6ace8ed9a4db5040519fc603026fd19c129da7f02e847c0ec69e722c6721eb5"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: I1013 22:05:07.431137     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:07 embed-certs-521669 kubelet[708]: E1013 22:05:07.431348     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:08 embed-certs-521669 kubelet[708]: I1013 22:05:08.436384     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:08 embed-certs-521669 kubelet[708]: E1013 22:05:08.436593     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:10 embed-certs-521669 kubelet[708]: I1013 22:05:10.455692     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-69m9v" podStartSLOduration=1.651877776 podStartE2EDuration="8.455666969s" podCreationTimestamp="2025-10-13 22:05:02 +0000 UTC" firstStartedPulling="2025-10-13 22:05:02.962578188 +0000 UTC m=+7.732734785" lastFinishedPulling="2025-10-13 22:05:09.766367392 +0000 UTC m=+14.536523978" observedRunningTime="2025-10-13 22:05:10.455244647 +0000 UTC m=+15.225401252" watchObservedRunningTime="2025-10-13 22:05:10.455666969 +0000 UTC m=+15.225823573"
	Oct 13 22:05:16 embed-certs-521669 kubelet[708]: I1013 22:05:16.515225     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: I1013 22:05:17.467652     708 scope.go:117] "RemoveContainer" containerID="950fb4a15963a3ab99f0025ebd28bf2bade24b1ad6dee6ee02bd84e293d854df"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: I1013 22:05:17.467903     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:17 embed-certs-521669 kubelet[708]: E1013 22:05:17.468143     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:26 embed-certs-521669 kubelet[708]: I1013 22:05:26.515279     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:26 embed-certs-521669 kubelet[708]: E1013 22:05:26.515499     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:29 embed-certs-521669 kubelet[708]: I1013 22:05:29.506794     708 scope.go:117] "RemoveContainer" containerID="51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.352689     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.547669     708 scope.go:117] "RemoveContainer" containerID="5e0d87998e93d93e30f8d61432686e1e7fea323a52c7d2bc44b17f89cd4b7422"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: I1013 22:05:41.548124     708 scope.go:117] "RemoveContainer" containerID="32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	Oct 13 22:05:41 embed-certs-521669 kubelet[708]: E1013 22:05:41.548336     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:46 embed-certs-521669 kubelet[708]: I1013 22:05:46.514937     708 scope.go:117] "RemoveContainer" containerID="32890e23034691fcd8995f2c2f36cdf5b876b33ba6b110ee02ffd7a8a5b1b672"
	Oct 13 22:05:46 embed-certs-521669 kubelet[708]: E1013 22:05:46.515156     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lshp4_kubernetes-dashboard(1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lshp4" podUID="1ba2bc2f-7557-44ac-b8f8-e7cd2e592ad5"
	Oct 13 22:05:49 embed-certs-521669 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 13 22:05:50 embed-certs-521669 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
	
	
	==> kubernetes-dashboard [ddc954a1f166be754e4eb7e65b3e26d4f213b366dfcb0dee4876ade24670515c] <==
	2025/10/13 22:05:09 Using namespace: kubernetes-dashboard
	2025/10/13 22:05:09 Using in-cluster config to connect to apiserver
	2025/10/13 22:05:09 Using secret token for csrf signing
	2025/10/13 22:05:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:05:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:05:09 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:05:09 Generating JWE encryption key
	2025/10/13 22:05:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:05:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:05:09 Initializing JWE encryption key from synchronized object
	2025/10/13 22:05:09 Creating in-cluster Sidecar client
	2025/10/13 22:05:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:05:09 Serving insecurely on HTTP port: 9090
	2025/10/13 22:05:09 Starting overwatch
	2025/10/13 22:05:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [51119e820cd1b0834228a2770ec00edf3d21ca637bc49ffae945718586b6a219] <==
	I1013 22:04:58.741754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:05:28.744982       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f8588d53d142a704b6a3313145b02df3eb18b2272fb5de5e687eadb80a950b3b] <==
	I1013 22:05:29.578440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:05:29.587545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:05:29.587591       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:05:29.590893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:33.047638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:37.308446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:40.907672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:43.961498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:46.984358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:46.989520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:46.989662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:05:46.989835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa!
	I1013 22:05:46.989816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2713828c-d71d-46c7-8af8-1b55a2cb8cd7", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa became leader
	W1013 22:05:46.993445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:47.000968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:05:47.090404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-521669_163a44f7-0f6d-47c4-96d5-b31e6d0299aa!
	W1013 22:05:49.004781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:49.010689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:51.013986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:51.018624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:53.022309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:53.026614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:55.030741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:05:55.036286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-521669 -n embed-certs-521669
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-521669 -n embed-certs-521669: exit status 2 (355.437462ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-521669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.39s)
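The kubelet entries in the post-mortem above show the standard CrashLoopBackOff progression for dashboard-metrics-scraper: each failed restart doubles the back-off, 10s to 20s to 40s. A minimal Go sketch of that doubling policy; the five-minute cap is an assumption taken from upstream kubelet, not from this log:

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the restart delay after each crash, starting at 10s
	// and saturating at max, mirroring the "back-off 10s/20s/40s restarting
	// failed container" messages in the kubelet log above.
	func nextBackoff(cur, max time.Duration) time.Duration {
		if cur == 0 {
			return 10 * time.Second
		}
		if next := 2 * cur; next < max {
			return next
		}
		return max
	}

	func main() {
		var d time.Duration
		for i := 0; i < 6; i++ {
			d = nextBackoff(d, 5*time.Minute) // assumed cap
			fmt.Println(d)                    // 10s 20s 40s 1m20s 2m40s 5m0s
		}
	}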
E1013 22:07:05.203167  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:07:06.485261  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
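The storage-provisioner log in the dump above walks through client-go leader election ("attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath", then "successfully acquired"), and the surrounding v1 Endpoints deprecation warnings indicate the lock is still Endpoints-backed. A hedged sketch of the same election done with the current Lease-based lock; only the lease name and namespace are taken from the log, the identity and timings are illustrative:

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io Lease lock avoids the "v1 Endpoints is
		// deprecated" warnings that an Endpoints-backed lock produces.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath", // lease name from the log
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-demo"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* lease lost; stop provisioning */ },
			},
		})
	}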


Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.47
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.83
22 TestOffline 62.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 153.04
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 7.44
48 TestAddons/StoppedEnableDisable 16.71
49 TestCertOptions 31.14
50 TestCertExpiration 215.82
52 TestForceSystemdFlag 24.36
53 TestForceSystemdEnv 27.92
55 TestKVMDriverInstallOrUpdate 1.04
59 TestErrorSpam/setup 24.05
60 TestErrorSpam/start 0.66
61 TestErrorSpam/status 0.94
62 TestErrorSpam/pause 6.17
63 TestErrorSpam/unpause 6.01
64 TestErrorSpam/stop 2.59
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 36.76
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.35
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
76 TestFunctional/serial/CacheCmd/cache/add_local 1.16
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 68
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.28
87 TestFunctional/serial/LogsFileCmd 1.28
88 TestFunctional/serial/InvalidService 3.76
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 8.96
92 TestFunctional/parallel/DryRun 0.39
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 0.93
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 24.26
102 TestFunctional/parallel/SSHCmd 0.54
103 TestFunctional/parallel/CpCmd 1.73
104 TestFunctional/parallel/MySQL 21.26
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.78
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
114 TestFunctional/parallel/License 0.44
116 TestFunctional/parallel/Version/short 0.07
117 TestFunctional/parallel/Version/components 0.63
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
121 TestFunctional/parallel/ImageCommands/ImageListYaml 1.4
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.2
123 TestFunctional/parallel/ImageCommands/Setup 0.99
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
143 TestFunctional/parallel/ProfileCmd/profile_list 0.38
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
145 TestFunctional/parallel/MountCmd/any-port 5.84
146 TestFunctional/parallel/MountCmd/specific-port 1.85
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 133.39
164 TestMultiControlPlane/serial/DeployApp 6.16
165 TestMultiControlPlane/serial/PingHostFromPods 0.99
166 TestMultiControlPlane/serial/AddWorkerNode 24.29
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
169 TestMultiControlPlane/serial/CopyFile 17.37
170 TestMultiControlPlane/serial/StopSecondaryNode 13.35
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.22
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.19
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.12
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 41.68
178 TestMultiControlPlane/serial/RestartCluster 56.77
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
180 TestMultiControlPlane/serial/AddSecondaryNode 67.07
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 38.21
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 8
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 28.17
211 TestKicCustomNetwork/use_default_bridge_network 26.81
212 TestKicExistingNetwork 25.1
213 TestKicCustomSubnet 24.65
214 TestKicStaticIP 25.73
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 45.73
219 TestMountStart/serial/StartWithMountFirst 5.46
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.93
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.25
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 61.84
231 TestMultiNode/serial/DeployApp2Nodes 3.48
232 TestMultiNode/serial/PingHostFrom2Pods 0.67
233 TestMultiNode/serial/AddNode 24.37
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.6
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.29
239 TestMultiNode/serial/RestartKeepsNodes 57.51
240 TestMultiNode/serial/DeleteNode 5.04
241 TestMultiNode/serial/StopMultiNode 28.57
242 TestMultiNode/serial/RestartMultiNode 34.93
243 TestMultiNode/serial/ValidateNameConflict 23.31
248 TestPreload 107.19
250 TestScheduledStopUnix 98.02
253 TestInsufficientStorage 9.81
254 TestRunningBinaryUpgrade 48.51
256 TestKubernetesUpgrade 318.61
257 TestMissingContainerUpgrade 91
258 TestStoppedBinaryUpgrade/Setup 0.49
259 TestStoppedBinaryUpgrade/Upgrade 73.28
260 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
269 TestPause/serial/Start 43.56
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
272 TestNoKubernetes/serial/StartWithK8s 24.07
273 TestNoKubernetes/serial/StartWithStopK8s 29.5
274 TestPause/serial/SecondStartNoReconfiguration 5.96
276 TestNoKubernetes/serial/Start 4.73
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
278 TestNoKubernetes/serial/ProfileList 1.82
279 TestNoKubernetes/serial/Stop 1.25
280 TestNoKubernetes/serial/StartNoArgs 6.8
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
289 TestNetworkPlugins/group/false 3.51
294 TestStartStop/group/old-k8s-version/serial/FirstStart 47.77
296 TestStartStop/group/no-preload/serial/FirstStart 51.31
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.32
299 TestStartStop/group/old-k8s-version/serial/Stop 15.99
300 TestStartStop/group/no-preload/serial/DeployApp 7.24
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/old-k8s-version/serial/SecondStart 42.9
304 TestStartStop/group/no-preload/serial/Stop 18.07
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/no-preload/serial/SecondStart 47.35
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
312 TestStartStop/group/embed-certs/serial/FirstStart 70.65
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.87
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/newest-cni/serial/FirstStart 27.28
321 TestNetworkPlugins/group/auto/Start 39.14
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
324 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.19
327 TestStartStop/group/newest-cni/serial/Stop 7.98
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/newest-cni/serial/SecondStart 10.68
330 TestStartStop/group/embed-certs/serial/DeployApp 7.28
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.86
333 TestNetworkPlugins/group/auto/KubeletFlags 0.37
334 TestNetworkPlugins/group/auto/NetCatPod 9.27
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
340 TestStartStop/group/embed-certs/serial/Stop 16.84
341 TestNetworkPlugins/group/auto/DNS 0.14
342 TestNetworkPlugins/group/auto/Localhost 0.11
343 TestNetworkPlugins/group/auto/HairPin 0.13
344 TestNetworkPlugins/group/kindnet/Start 47.79
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
346 TestStartStop/group/embed-certs/serial/SecondStart 49.54
347 TestNetworkPlugins/group/calico/Start 52.62
348 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
354 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
356 TestNetworkPlugins/group/custom-flannel/Start 50.81
357 TestNetworkPlugins/group/kindnet/DNS 0.17
358 TestNetworkPlugins/group/kindnet/Localhost 0.15
359 TestNetworkPlugins/group/kindnet/HairPin 0.12
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
361 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.34
365 TestNetworkPlugins/group/calico/NetCatPod 10.21
366 TestNetworkPlugins/group/enable-default-cni/Start 68.57
367 TestNetworkPlugins/group/flannel/Start 50.97
368 TestNetworkPlugins/group/calico/DNS 0.15
369 TestNetworkPlugins/group/calico/Localhost 0.1
370 TestNetworkPlugins/group/calico/HairPin 0.1
371 TestNetworkPlugins/group/bridge/Start 71.27
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.24
374 TestNetworkPlugins/group/custom-flannel/DNS 0.12
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/flannel/NetCatPod 8.19
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
382 TestNetworkPlugins/group/flannel/DNS 0.14
383 TestNetworkPlugins/group/flannel/Localhost 0.1
384 TestNetworkPlugins/group/flannel/HairPin 0.12
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 9.19
390 TestNetworkPlugins/group/bridge/DNS 0.12
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (4.42s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-941848 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-941848 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.422847228s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.42s)
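Every "(dbg) Run:" entry in this report is the harness shelling out to the freshly built out/minikube-linux-amd64 binary and checking the exit status. A minimal sketch of that pattern, not minikube's actual helper; the two-minute timeout is an assumption:

	package integration

	import (
		"context"
		"os/exec"
		"testing"
		"time"
	)

	// runMinikube invokes the built binary, fails the test on a non-zero
	// exit, and returns combined stdout/stderr for post-mortem logging.
	// Example: runMinikube(t, "start", "-o=json", "--download-only", "-p", "download-only-941848")
	func runMinikube(t *testing.T, args ...string) []byte {
		t.Helper()
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			t.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return out
	}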

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 21:18:22.202415  230929 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1013 21:18:22.202521  230929 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
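preload.go resolves the cached tarball by convention under the minikube home directory. A sketch of the existence check it logs; the tarball naming is copied from the log line, and the "v18" preload schema version is release-specific, so treat it as an assumption:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether the preloaded-images tarball for a given
	// Kubernetes version is already cached under the minikube home.
	func preloadExists(minikubeHome, k8sVersion string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"))
	}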

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-941848
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-941848: exit status 85 (71.241169ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-941848 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-941848 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:18:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:18:17.824647  230941 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:18:17.824943  230941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:17.824956  230941 out.go:374] Setting ErrFile to fd 2...
	I1013 21:18:17.824960  230941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:17.825359  230941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	W1013 21:18:17.825560  230941 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-226873/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-226873/.minikube/config/config.json: no such file or directory
	I1013 21:18:17.826227  230941 out.go:368] Setting JSON to true
	I1013 21:18:17.828721  230941 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3646,"bootTime":1760386652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:18:17.828848  230941 start.go:141] virtualization: kvm guest
	I1013 21:18:17.831291  230941 out.go:99] [download-only-941848] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1013 21:18:17.831496  230941 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 21:18:17.831549  230941 notify.go:220] Checking for updates...
	I1013 21:18:17.833307  230941 out.go:171] MINIKUBE_LOCATION=21724
	I1013 21:18:17.835312  230941 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:18:17.837076  230941 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:18:17.838901  230941 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:18:17.840526  230941 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 21:18:17.846869  230941 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 21:18:17.847224  230941 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:18:17.871513  230941 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:18:17.871632  230941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:18.294765  230941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:18.283604069 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:18.294929  230941 docker.go:318] overlay module found
	I1013 21:18:18.296829  230941 out.go:99] Using the docker driver based on user configuration
	I1013 21:18:18.296875  230941 start.go:305] selected driver: docker
	I1013 21:18:18.296884  230941 start.go:925] validating driver "docker" against <nil>
	I1013 21:18:18.297023  230941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:18.353202  230941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:18.3433272 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:18.353393  230941 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:18:18.353984  230941 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1013 21:18:18.354186  230941 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:18:18.356119  230941 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-941848 host does not exist
	  To start a cluster, run: "minikube start -p download-only-941848"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
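"minikube logs" exits 85 here because the download-only profile never created a host, and the test records the non-zero exit as expected rather than failing. A sketch of asserting a specific exit code in a Go test; the helper is hypothetical, not the harness's own:

	package integration

	import (
		"errors"
		"os/exec"
		"testing"
	)

	// assertExitCode runs cmd and fails the test unless it exits with the
	// expected non-zero code (85 in the LogsDuration case above).
	func assertExitCode(t *testing.T, cmd *exec.Cmd, want int) {
		t.Helper()
		err := cmd.Run()
		var ee *exec.ExitError
		if !errors.As(err, &ee) {
			t.Fatalf("expected an exit error, got %v", err)
		}
		if got := ee.ExitCode(); got != want {
			t.Fatalf("exit code = %d, want %d", got, want)
		}
	}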

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-941848
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.47s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-318241 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-318241 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.467618038s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.47s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 21:18:27.107417  230929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1013 21:18:27.107465  230929 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-226873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-318241
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-318241: exit status 85 (66.22726ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-941848 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-941848 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ delete  │ -p download-only-941848                                                                                                                                                   │ download-only-941848 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ start   │ -o=json --download-only -p download-only-318241 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-318241 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:18:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:18:22.684245  231294 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:18:22.684518  231294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:22.684528  231294 out.go:374] Setting ErrFile to fd 2...
	I1013 21:18:22.684532  231294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:22.684793  231294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:18:22.685384  231294 out.go:368] Setting JSON to true
	I1013 21:18:22.686292  231294 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3651,"bootTime":1760386652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:18:22.686383  231294 start.go:141] virtualization: kvm guest
	I1013 21:18:22.688567  231294 out.go:99] [download-only-318241] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:18:22.688713  231294 notify.go:220] Checking for updates...
	I1013 21:18:22.690275  231294 out.go:171] MINIKUBE_LOCATION=21724
	I1013 21:18:22.691792  231294 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:18:22.693121  231294 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:18:22.694505  231294 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:18:22.696044  231294 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 21:18:22.698516  231294 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 21:18:22.698829  231294 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:18:22.722515  231294 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:18:22.722628  231294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:22.783206  231294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:22.772856074 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:22.783369  231294 docker.go:318] overlay module found
	I1013 21:18:22.785416  231294 out.go:99] Using the docker driver based on user configuration
	I1013 21:18:22.785452  231294 start.go:305] selected driver: docker
	I1013 21:18:22.785458  231294 start.go:925] validating driver "docker" against <nil>
	I1013 21:18:22.785547  231294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:18:22.849112  231294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-13 21:18:22.838614308 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:18:22.849280  231294 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:18:22.849776  231294 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1013 21:18:22.849938  231294 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:18:22.852496  231294 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-318241 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318241"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-318241
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-704567 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-704567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-704567
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1013 21:18:28.212441  230929 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-784602 --alsologtostderr --binary-mirror http://127.0.0.1:46779 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-784602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-784602
--- PASS: TestBinaryMirror (0.83s)

TestOffline (62.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-932435 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-932435 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (59.92323447s)
helpers_test.go:175: Cleaning up "offline-crio-932435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-932435
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-932435: (2.453350715s)
--- PASS: TestOffline (62.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-143775
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-143775: exit status 85 (62.212701ms)

-- stdout --
	* Profile "addons-143775" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-143775"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-143775
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-143775: exit status 85 (61.601072ms)

-- stdout --
	* Profile "addons-143775" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-143775"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (153.04s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-143775 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-143775 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m33.037527814s)
--- PASS: TestAddons/Setup (153.04s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-143775 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-143775 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-143775 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-143775 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c963d75c-856f-4d71-8188-3a63254f88b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c963d75c-856f-4d71-8188-3a63254f88b8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003968246s
addons_test.go:694: (dbg) Run:  kubectl --context addons-143775 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-143775 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-143775 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

TestAddons/StoppedEnableDisable (16.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-143775
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-143775: (16.442179979s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-143775
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-143775
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-143775
--- PASS: TestAddons/StoppedEnableDisable (16.71s)

TestCertOptions (31.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-442906 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-442906 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.89725933s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-442906 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-442906 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-442906 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-442906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-442906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-442906: (2.52402691s)
--- PASS: TestCertOptions (31.14s)

TestCertExpiration (215.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-894101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-894101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.796248561s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-894101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.53682751s)
helpers_test.go:175: Cleaning up "cert-expiration-894101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-894101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-894101: (2.490141317s)
--- PASS: TestCertExpiration (215.82s)

TestForceSystemdFlag (24.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-886102 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-886102 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.258289292s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-886102 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-886102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-886102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-886102: (2.806753158s)
--- PASS: TestForceSystemdFlag (24.36s)

TestForceSystemdEnv (27.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-010902 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-010902 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.442891925s)
helpers_test.go:175: Cleaning up "force-systemd-env-010902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-010902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-010902: (2.478840047s)
--- PASS: TestForceSystemdEnv (27.92s)

TestKVMDriverInstallOrUpdate (1.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1013 22:00:43.629026  230929 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1013 22:00:43.629203  230929 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2433372619/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 22:00:43.659071  230929 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2433372619/001/docker-machine-driver-kvm2 version is 1.1.1
W1013 22:00:43.659110  230929 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1013 22:00:43.659221  230929 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1013 22:00:43.659263  230929 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2433372619/001/docker-machine-driver-kvm2
I1013 22:00:44.518354  230929 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2433372619/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 22:00:44.537130  230929 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2433372619/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.04s)

TestErrorSpam/setup (24.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-426363 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-426363 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-426363 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-426363 --driver=docker  --container-runtime=crio: (24.045887875s)
--- PASS: TestErrorSpam/setup (24.05s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (6.17s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause: exit status 80 (2.241938458s)

-- stdout --
	* Pausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause: exit status 80 (2.387918256s)

-- stdout --
	* Pausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause: exit status 80 (1.54295912s)

-- stdout --
	* Pausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.17s)

TestErrorSpam/unpause (6.01s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause: exit status 80 (2.236301803s)

-- stdout --
	* Unpausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause: exit status 80 (1.936948752s)

-- stdout --
	* Unpausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause: exit status 80 (1.834731007s)

-- stdout --
	* Unpausing node nospam-426363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:24:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.01s)

TestErrorSpam/stop (2.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 stop: (2.405828727s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-426363 --log_dir /tmp/nospam-426363 stop
--- PASS: TestErrorSpam/stop (2.59s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-226873/.minikube/files/etc/test/nested/copy/230929/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-412292 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.757024571s)
--- PASS: TestFunctional/serial/StartWithProxy (36.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.35s)

=== RUN   TestFunctional/serial/SoftStart
I1013 21:25:26.715880  230929 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-412292 --alsologtostderr -v=8: (6.349869204s)
functional_test.go:678: soft start took 6.350610556s for "functional-412292" cluster.
I1013 21:25:33.066137  230929 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.35s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-412292 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 cache add registry.k8s.io/pause:3.3: (1.148606965s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-412292 /tmp/TestFunctionalserialCacheCmdcacheadd_local3644316787/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache add minikube-local-cache-test:functional-412292
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache delete minikube-local-cache-test:functional-412292
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-412292
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.130316ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 kubectl -- --context functional-412292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-412292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1013 21:26:02.834666  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:02.841152  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:02.852553  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:02.874013  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:02.915466  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:02.997049  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:03.158627  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:03.480179  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:04.122266  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:05.403895  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:07.966841  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:13.088600  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:23.330329  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:43.812216  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-412292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.995493931s)
functional_test.go:776: restart took 1m7.995653856s for "functional-412292" cluster.
I1013 21:26:47.709734  230929 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (68.00s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-412292 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 logs: (1.274981423s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 logs --file /tmp/TestFunctionalserialLogsFileCmd1584877104/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 logs --file /tmp/TestFunctionalserialLogsFileCmd1584877104/001/logs.txt: (1.279028945s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (3.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-412292 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-412292
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-412292: exit status 115 (338.057654ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31115 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-412292 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.76s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 config get cpus: exit status 14 (74.109004ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 config get cpus: exit status 14 (52.906316ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (8.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-412292 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-412292 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 270028: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.96s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-412292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.839674ms)

-- stdout --
	* [functional-412292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1013 21:27:19.000489  269002 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:27:19.000789  269002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.000806  269002 out.go:374] Setting ErrFile to fd 2...
	I1013 21:27:19.000810  269002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.001056  269002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:27:19.001543  269002 out.go:368] Setting JSON to false
	I1013 21:27:19.002626  269002 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4187,"bootTime":1760386652,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:27:19.002724  269002 start.go:141] virtualization: kvm guest
	I1013 21:27:19.005112  269002 out.go:179] * [functional-412292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:27:19.006564  269002 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:27:19.006578  269002 notify.go:220] Checking for updates...
	I1013 21:27:19.009299  269002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:27:19.010781  269002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:27:19.012158  269002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:27:19.013459  269002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:27:19.015167  269002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:27:19.017241  269002 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:27:19.017905  269002 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:27:19.042511  269002 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:27:19.042596  269002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:27:19.103727  269002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-13 21:27:19.093112919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:27:19.103840  269002 docker.go:318] overlay module found
	I1013 21:27:19.105575  269002 out.go:179] * Using the docker driver based on existing profile
	I1013 21:27:19.106906  269002 start.go:305] selected driver: docker
	I1013 21:27:19.106922  269002 start.go:925] validating driver "docker" against &{Name:functional-412292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-412292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:27:19.107061  269002 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:27:19.108948  269002 out.go:203] 
	W1013 21:27:19.110117  269002 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 21:27:19.111515  269002 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
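
Note: exit status 23 is the RSRC_INSUFFICIENT_REQ_MEMORY path; --dry-run still runs resource validation, which is the point of the test. A sketch of the comparison the message implies; the mixed MiB/MB wording is reproduced from the log, and the 1800 threshold is the one cited there:

	package main

	import "fmt"

	const minUsableMB = 1800 // minimum cited in the error message above

	// validateMemory mirrors the check implied by the log: requests below
	// the usable minimum are rejected before any resources are touched.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			// a real CLI would exit with the reserved code, 23 in this run
		}
	}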

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-412292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-412292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.974144ms)

                                                
                                                
-- stdout --
	* [functional-412292] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:27:19.398134  269250 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:27:19.398277  269250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.398291  269250 out.go:374] Setting ErrFile to fd 2...
	I1013 21:27:19.398298  269250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:27:19.398774  269250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:27:19.399465  269250 out.go:368] Setting JSON to false
	I1013 21:27:19.400801  269250 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4187,"bootTime":1760386652,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:27:19.400957  269250 start.go:141] virtualization: kvm guest
	I1013 21:27:19.403044  269250 out.go:179] * [functional-412292] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1013 21:27:19.404349  269250 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:27:19.404349  269250 notify.go:220] Checking for updates...
	I1013 21:27:19.407173  269250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:27:19.408617  269250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 21:27:19.412531  269250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 21:27:19.413801  269250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:27:19.415143  269250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:27:19.416926  269250 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:27:19.417447  269250 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:27:19.444278  269250 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 21:27:19.444382  269250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:27:19.510061  269250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-13 21:27:19.497446178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:27:19.510218  269250 docker.go:318] overlay module found
	I1013 21:27:19.512414  269250 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1013 21:27:19.513778  269250 start.go:305] selected driver: docker
	I1013 21:27:19.513797  269250 start.go:925] validating driver "docker" against &{Name:functional-412292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-412292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:27:19.513921  269250 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:27:19.515965  269250 out.go:203] 
	W1013 21:27:19.517631  269250 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 21:27:19.518903  269250 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
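
Note: the French output above is selected from the host locale; the test only asserts that localized strings replace the English ones. A toy sketch of environment-driven message selection, a stand-in for minikube's actual translation machinery (which is not shown in this log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	var msgs = map[string]string{
		"en": "Using the docker driver based on existing profile",
		"fr": "Utilisation du pilote docker basé sur le profil existant",
	}

	// locale resolves the language code the way POSIX tools typically do.
	func locale() string {
		for _, k := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
			if v := os.Getenv(k); v != "" {
				return strings.SplitN(v, "_", 2)[0] // "fr_FR.UTF-8" -> "fr"
			}
		}
		return "en"
	}

	func main() {
		m, ok := msgs[locale()]
		if !ok {
			m = msgs["en"] // unknown locales fall back to English
		}
		fmt.Println("*", m)
	}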

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
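
Note: the -f flag takes a Go text/template over the status object; "kublet" in the format string above is a literal label in the test's template, not a field name (the field is .Kubelet). A self-contained sketch of the same template against a hypothetical Status struct inferred from the keys used:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is hypothetical, inferred from the template fields above.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(format))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}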

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 addons list -o json
I1013 21:27:00.566519  230929 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b3c542b3-ff7f-462d-8453-dd205cf0c3fa] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.054761006s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-412292 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-412292 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-412292 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-412292 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [668f6314-610d-455e-998f-21282fa7e499] Pending
helpers_test.go:352: "sp-pod" [668f6314-610d-455e-998f-21282fa7e499] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [668f6314-610d-455e-998f-21282fa7e499] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003871573s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-412292 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-412292 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-412292 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [750de011-050a-4adc-97b6-d426f309a83a] Pending
helpers_test.go:352: "sp-pod" [750de011-050a-4adc-97b6-d426f309a83a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [750de011-050a-4adc-97b6-d426f309a83a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00462785s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-412292 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.26s)
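
Note: the interesting part of the sequence above is write / delete pod / recreate pod / read back, which proves the claim is backed by a persistent volume rather than pod-local storage. A sketch of the same round trip driven through kubectl, assuming the test's context name and manifests:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		fmt.Printf("%s", out)
	}

	func main() {
		ctx := "--context=functional-412292"
		run(ctx, "apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
		run(ctx, "wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
		run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
		run(ctx, "wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
		run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must survive
	}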

                                                
                                    
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh -n functional-412292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cp functional-412292:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1987802676/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh -n functional-412292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh -n functional-412292 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)

                                                
                                    
TestFunctional/parallel/MySQL (21.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-412292 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hhck9" [2b673ece-9090-4d70-bdc6-93c49f1f439e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1013 21:27:24.773574  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2025/10/13 21:27:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-hhck9" [2b673ece-9090-4d70-bdc6-93c49f1f439e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003556383s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;": exit status 1 (130.437151ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1013 21:27:38.733820  230929 retry.go:31] will retry after 639.234095ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;": exit status 1 (101.517361ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1013 21:27:39.475396  230929 retry.go:31] will retry after 1.670976218s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;": exit status 1 (92.335528ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1013 21:27:41.239591  230929 retry.go:31] will retry after 1.369788124s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-412292 exec mysql-5bb876957f-hhck9 -- mysql -ppassword -e "show databases;"
E1013 21:28:46.695325  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:31:02.825029  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:31:30.537067  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:36:02.825765  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (21.26s)
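
Note: both errors above are expected while mysqld bootstraps: ERROR 1045 fires before the root password is applied and ERROR 2002 before the socket is up, so the harness retries with growing pauses. A minimal sketch of that retry-with-backoff pattern around the same kubectl exec:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-412292",
				"exec", "mysql-5bb876957f-hhck9", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			// 1045 (auth not ready) and 2002 (socket not up) both mean "retry".
			fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // exponential backoff; jitter omitted for brevity
		}
		panic("mysql never became ready")
	}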

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/230929/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /etc/test/nested/copy/230929/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/230929.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /etc/ssl/certs/230929.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/230929.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /usr/share/ca-certificates/230929.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2309292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /etc/ssl/certs/2309292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2309292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /usr/share/ca-certificates/2309292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-412292 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
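
Note: the go-template above ranges over the node's label map; Go's text/template visits map keys in sorted order, which keeps the output stable across runs. The same template, runnable standalone against a hypothetical label map:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		labels := map[string]string{ // hypothetical node labels
			"kubernetes.io/arch":     "amd64",
			"kubernetes.io/hostname": "functional-412292",
			"kubernetes.io/os":       "linux",
		}
		// Same shape as the test's template: print each key and a space.
		t := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .}}{{$k}} {{end}}"))
		if err := t.Execute(os.Stdout, labels); err != nil {
			panic(err)
		}
	}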

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "sudo systemctl is-active docker": exit status 1 (296.186617ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "sudo systemctl is-active containerd": exit status 1 (286.450872ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
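
Note: systemctl is-active exits non-zero for units that are not active (status 3 here), and minikube ssh surfaces that as "Process exited with status 3", so the assertion is on the "inactive" stdout rather than on the exit code. A sketch of reading the state that way, assuming systemd's usual is-active behavior:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("systemctl", "is-active", "docker").Output()
		state := strings.TrimSpace(string(out)) // stdout is captured even on failure
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("active:", state)
		case errors.As(err, &ee):
			// Non-zero just means "not active"; the printed state is the answer.
			fmt.Printf("not active (state=%q, exit=%d)\n", state, ee.ExitCode())
		default:
			panic(err) // e.g. systemctl not installed
		}
	}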

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-412292 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ localhost/my-image                      │ functional-412292  │ 58b53214658bb │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-412292 image ls --format table --alsologtostderr:
I1013 21:27:35.318328  271714 out.go:360] Setting OutFile to fd 1 ...
I1013 21:27:35.318601  271714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:35.318612  271714 out.go:374] Setting ErrFile to fd 2...
I1013 21:27:35.318616  271714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:35.318822  271714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
I1013 21:27:35.319439  271714 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:35.319534  271714 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:35.319912  271714 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
I1013 21:27:35.337349  271714 ssh_runner.go:195] Run: systemctl --version
I1013 21:27:35.337400  271714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
I1013 21:27:35.354961  271714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
I1013 21:27:35.452191  271714 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-412292 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"58b53214658bb5ea6b82e7fe004261eb593f8bafeda20134f4749da1b07a2dff","repoDigests":["localhost/my-image@sha256:cd248c0721a97ba5cdba9ec252281b0f3f6b97fb32badf9bf751f40ff09d6376"],"repoTags":["localhost/my-image:functional-412292"],"size":"1468744"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"1aad877f497f644e71c16705b3d0ba1f1fa958f3223d68c631a111f62622da89","repoDigests":["docker.io/library/9ee7c4ad88e0056c8b238c08c046e8ed10097ff9de87c48f15eea04b9de31996-tmp@sha256:07b9c0d0569a505a57f388038ecceff58167792c7c663e895299e91c3a7547ed"],"repoTags":[],"size":"1466132"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-412292 image ls --format json --alsologtostderr:
I1013 21:27:35.101382  271660 out.go:360] Setting OutFile to fd 1 ...
I1013 21:27:35.101486  271660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:35.101496  271660 out.go:374] Setting ErrFile to fd 2...
I1013 21:27:35.101503  271660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:35.101699  271660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
I1013 21:27:35.102328  271660 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:35.102423  271660 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:35.102801  271660 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
I1013 21:27:35.121538  271660 ssh_runner.go:195] Run: systemctl --version
I1013 21:27:35.121590  271660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
I1013 21:27:35.139039  271660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
I1013 21:27:35.236208  271660 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
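
Note: the table, json, and yaml variants all come from the same "sudo crictl images --output json" call visible in the stderr traces; only the client-side rendering differs. A sketch of decoding the JSON listing shown above, with struct fields inferred from that output rather than from any documented schema:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// image mirrors the fields visible in the JSON listing above.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // bytes, as a decimal string
	}

	func main() {
		// Trimmed sample in the same shape as the log's output.
		raw := []byte(`[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]`)
		var imgs []image
		if err := json.Unmarshal(raw, &imgs); err != nil {
			panic(err)
		}
		for _, im := range imgs {
			fmt.Printf("%-35s %s %sB\n", im.RepoTags[0], im.ID[:13], im.Size)
		}
	}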

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 image ls --format yaml --alsologtostderr: (1.398339205s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-412292 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-412292 image ls --format yaml --alsologtostderr:
I1013 21:27:31.524385  270916 out.go:360] Setting OutFile to fd 1 ...
I1013 21:27:31.524724  270916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:31.524737  270916 out.go:374] Setting ErrFile to fd 2...
I1013 21:27:31.524742  270916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:31.525106  270916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
I1013 21:27:31.526037  270916 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:31.526204  270916 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:31.526818  270916 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
I1013 21:27:31.548586  270916 ssh_runner.go:195] Run: systemctl --version
I1013 21:27:31.548650  270916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
I1013 21:27:31.570312  270916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
I1013 21:27:31.677573  270916 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:27:32.848020  270916 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.170380675s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.40s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh pgrep buildkitd: exit status 1 (276.622673ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image build -t localhost/my-image:functional-412292 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 image build -t localhost/my-image:functional-412292 testdata/build --alsologtostderr: (1.695747194s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-412292 image build -t localhost/my-image:functional-412292 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1aad877f497
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-412292
--> 58b53214658
Successfully tagged localhost/my-image:functional-412292
58b53214658bb5ea6b82e7fe004261eb593f8bafeda20134f4749da1b07a2dff
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-412292 image build -t localhost/my-image:functional-412292 testdata/build --alsologtostderr:
I1013 21:27:33.181918  271135 out.go:360] Setting OutFile to fd 1 ...
I1013 21:27:33.182103  271135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:33.182117  271135 out.go:374] Setting ErrFile to fd 2...
I1013 21:27:33.182123  271135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:27:33.182463  271135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
I1013 21:27:33.183321  271135 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:33.184088  271135 config.go:182] Loaded profile config "functional-412292": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:27:33.184694  271135 cli_runner.go:164] Run: docker container inspect functional-412292 --format={{.State.Status}}
I1013 21:27:33.202809  271135 ssh_runner.go:195] Run: systemctl --version
I1013 21:27:33.202867  271135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-412292
I1013 21:27:33.223148  271135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/functional-412292/id_rsa Username:docker}
I1013 21:27:33.322797  271135 build_images.go:161] Building image from path: /tmp/build.4269243178.tar
I1013 21:27:33.322880  271135 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 21:27:33.331315  271135 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4269243178.tar
I1013 21:27:33.335176  271135 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4269243178.tar: stat -c "%s %y" /var/lib/minikube/build/build.4269243178.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4269243178.tar': No such file or directory
I1013 21:27:33.335206  271135 ssh_runner.go:362] scp /tmp/build.4269243178.tar --> /var/lib/minikube/build/build.4269243178.tar (3072 bytes)
I1013 21:27:33.353730  271135 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4269243178
I1013 21:27:33.361671  271135 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4269243178 -xf /var/lib/minikube/build/build.4269243178.tar
I1013 21:27:33.369768  271135 crio.go:315] Building image: /var/lib/minikube/build/build.4269243178
I1013 21:27:33.369838  271135 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-412292 /var/lib/minikube/build/build.4269243178 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1013 21:27:34.803572  271135 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-412292 /var/lib/minikube/build/build.4269243178 --cgroup-manager=cgroupfs: (1.433704586s)
I1013 21:27:34.803660  271135 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4269243178
I1013 21:27:34.812541  271135 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4269243178.tar
I1013 21:27:34.820703  271135 build_images.go:217] Built localhost/my-image:functional-412292 from /tmp/build.4269243178.tar
I1013 21:27:34.820740  271135 build_images.go:133] succeeded building to: functional-412292
I1013 21:27:34.820746  271135 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.20s)
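
Note: with the crio runtime, minikube delegates the build to podman on the node, as the stderr above shows. A minimal by-hand repro of the flow this test exercises (profile, tag, and build context taken from the log):

  # Build an image inside the cluster node, then confirm the runtime can see the tag
  out/minikube-linux-amd64 -p functional-412292 image build -t localhost/my-image:functional-412292 testdata/build
  out/minikube-linux-amd64 -p functional-412292 image ls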

TestFunctional/parallel/ImageCommands/Setup (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-412292
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 264444: os: process already finished
helpers_test.go:525: unable to kill pid 264245: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-412292 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e1a21052-0ea6-46c0-b8e3-9ea73aac61fb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e1a21052-0ea6-46c0-b8e3-9ea73aac61fb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004156623s
I1013 21:27:07.079947  230929 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)
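
The Setup step polls pods labeled run=nginx-svc until they report Running. Outside the harness, a roughly equivalent wait (same context, label, and 4m budget as the log; condition=Ready is slightly stricter than the test's Running check) is:

  kubectl --context functional-412292 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m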

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image rm kicbase/echo-server:functional-412292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-412292 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.109.192 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-412292 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
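
Taken together, the tunnel steps above amount to this by-hand sequence (profile and service name from the log; the curl target is the ingress IP the IngressIP step resolved):

  # Start a tunnel in the background, read the LoadBalancer IP, then hit the service
  out/minikube-linux-amd64 -p functional-412292 tunnel &
  kubectl --context functional-412292 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.104.109.192/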

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "330.820528ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.309468ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "329.65554ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.604547ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
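
The timing gap above (~330ms vs ~53ms) is the point of --light: per minikube's help text it skips validating each cluster's status and only reads the saved profile configs. The two invocations the test compares:

  out/minikube-linux-amd64 profile list -o json           # probes each cluster's status
  out/minikube-linux-amd64 profile list -o json --light   # config only, much faster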

TestFunctional/parallel/MountCmd/any-port (5.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdany-port396593887/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760390829340377069" to /tmp/TestFunctionalparallelMountCmdany-port396593887/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760390829340377069" to /tmp/TestFunctionalparallelMountCmdany-port396593887/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760390829340377069" to /tmp/TestFunctionalparallelMountCmdany-port396593887/001/test-1760390829340377069
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.552542ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1013 21:27:09.616209  230929 retry.go:31] will retry after 661.332508ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 21:27 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 21:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 21:27 test-1760390829340377069
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh cat /mount-9p/test-1760390829340377069
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-412292 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e99e963e-7d69-4cfe-b5a4-d0ea36779e10] Pending
helpers_test.go:352: "busybox-mount" [e99e963e-7d69-4cfe-b5a4-d0ea36779e10] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
I1013 21:27:12.280635  230929 detect.go:223] nested VM detected
helpers_test.go:352: "busybox-mount" [e99e963e-7d69-4cfe-b5a4-d0ea36779e10] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e99e963e-7d69-4cfe-b5a4-d0ea36779e10] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003121611s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-412292 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdany-port396593887/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.84s)
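
A by-hand sketch of the 9p mount flow this test drives (profile and commands from the log; /tmp/hostdir is an illustrative host path):

  # Expose a host directory inside the node over 9p, then verify it from the guest
  out/minikube-linux-amd64 mount -p functional-412292 /tmp/hostdir:/mount-9p &
  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-412292 ssh -- ls -la /mount-9p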

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdspecific-port2546608781/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.116177ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1013 21:27:15.463380  230929 retry.go:31] will retry after 563.463279ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdspecific-port2546608781/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "sudo umount -f /mount-9p": exit status 1 (267.718521ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-412292 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdspecific-port2546608781/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T" /mount1: exit status 1 (338.844731ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1013 21:27:17.375569  230929 retry.go:31] will retry after 739.642203ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-412292 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-412292 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2128528788/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)
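
Cleanup here leans on the --kill flag seen at functional_test_mount_test.go:370, which tears down every mount process for the profile in one shot instead of stopping the three daemons individually:

  # Terminate all outstanding minikube mount processes for this profile
  out/minikube-linux-amd64 mount -p functional-412292 --kill=true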

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
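
All three update-context variants run the same command and differ only in the kubeconfig state they start from. Stripped of the test's logging flags, it is:

  # Repoint the profile's kubeconfig entry at the cluster's current apiserver address
  out/minikube-linux-amd64 -p functional-412292 update-context
  kubectl config current-context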

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 service list: (1.700133277s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-412292 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-412292 service list -o json: (1.692585239s)
functional_test.go:1504: Took "1.692683718s" to run "out/minikube-linux-amd64 -p functional-412292 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)
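
Both ServiceCmd list variants enumerate the profile's services; -o json emits a machine-readable form suitable for scripting:

  out/minikube-linux-amd64 -p functional-412292 service list
  out/minikube-linux-amd64 -p functional-412292 service list -o json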

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-412292
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-412292
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-412292
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (133.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m12.666562744s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.39s)
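
The HA cluster the rest of this group reuses comes from a single start invocation: --ha provisions multiple control-plane nodes behind a shared apiserver endpoint (192.168.49.254:8443 later in this run's status logs). Reproduced from the log:

  out/minikube-linux-amd64 -p ha-631968 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ha-631968 status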

TestMultiControlPlane/serial/DeployApp (6.16s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 kubectl -- rollout status deployment/busybox: (4.122306581s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-nwmmj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-rwphh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-wl7z4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-nwmmj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-rwphh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-wl7z4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-nwmmj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-rwphh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-wl7z4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.16s)
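
The DNS checks fan out over every busybox replica; against a single replica the pattern is as follows (pod name taken from the log; equivalently runnable via "minikube kubectl --"):

  kubectl --context ha-631968 apply -f ./testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-631968 rollout status deployment/busybox
  kubectl --context ha-631968 exec busybox-7b57f96db7-nwmmj -- nslookup kubernetes.default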

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-nwmmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-nwmmj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-rwphh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-rwphh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-wl7z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 kubectl -- exec busybox-7b57f96db7-wl7z4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)

TestMultiControlPlane/serial/AddWorkerNode (24.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 node add --alsologtostderr -v 5: (23.388224516s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.29s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-631968 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (17.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp testdata/cp-test.txt ha-631968:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1920877207/001/cp-test_ha-631968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968:/home/docker/cp-test.txt ha-631968-m02:/home/docker/cp-test_ha-631968_ha-631968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test_ha-631968_ha-631968-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968:/home/docker/cp-test.txt ha-631968-m03:/home/docker/cp-test_ha-631968_ha-631968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test_ha-631968_ha-631968-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968:/home/docker/cp-test.txt ha-631968-m04:/home/docker/cp-test_ha-631968_ha-631968-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test_ha-631968_ha-631968-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp testdata/cp-test.txt ha-631968-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1920877207/001/cp-test_ha-631968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m02:/home/docker/cp-test.txt ha-631968:/home/docker/cp-test_ha-631968-m02_ha-631968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test_ha-631968-m02_ha-631968.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m02:/home/docker/cp-test.txt ha-631968-m03:/home/docker/cp-test_ha-631968-m02_ha-631968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test_ha-631968-m02_ha-631968-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m02:/home/docker/cp-test.txt ha-631968-m04:/home/docker/cp-test_ha-631968-m02_ha-631968-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test_ha-631968-m02_ha-631968-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp testdata/cp-test.txt ha-631968-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1920877207/001/cp-test_ha-631968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m03:/home/docker/cp-test.txt ha-631968:/home/docker/cp-test_ha-631968-m03_ha-631968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test_ha-631968-m03_ha-631968.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m03:/home/docker/cp-test.txt ha-631968-m02:/home/docker/cp-test_ha-631968-m03_ha-631968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test_ha-631968-m03_ha-631968-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m03:/home/docker/cp-test.txt ha-631968-m04:/home/docker/cp-test_ha-631968-m03_ha-631968-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test_ha-631968-m03_ha-631968-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp testdata/cp-test.txt ha-631968-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1920877207/001/cp-test_ha-631968-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m04:/home/docker/cp-test.txt ha-631968:/home/docker/cp-test_ha-631968-m04_ha-631968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968 "sudo cat /home/docker/cp-test_ha-631968-m04_ha-631968.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m04:/home/docker/cp-test.txt ha-631968-m02:/home/docker/cp-test_ha-631968-m04_ha-631968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test_ha-631968-m04_ha-631968-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 cp ha-631968-m04:/home/docker/cp-test.txt ha-631968-m03:/home/docker/cp-test_ha-631968-m04_ha-631968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m03 "sudo cat /home/docker/cp-test_ha-631968-m04_ha-631968-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.37s)
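
CopyFile round-trips one test file through every node pairing, but the primitive under test is just cp plus a node-scoped ssh to read the file back:

  # Copy into a specific node, then verify on that node
  out/minikube-linux-amd64 -p ha-631968 cp testdata/cp-test.txt ha-631968-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-631968 ssh -n ha-631968-m02 "sudo cat /home/docker/cp-test.txt"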

TestMultiControlPlane/serial/StopSecondaryNode (13.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 node stop m02 --alsologtostderr -v 5: (12.628915906s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5: exit status 7 (716.38604ms)
-- stdout --
	ha-631968
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-631968-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-631968-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-631968-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1013 21:40:22.662543  295746 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:40:22.662672  295746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:40:22.662683  295746 out.go:374] Setting ErrFile to fd 2...
	I1013 21:40:22.662687  295746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:40:22.662917  295746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:40:22.663184  295746 out.go:368] Setting JSON to false
	I1013 21:40:22.663224  295746 mustload.go:65] Loading cluster: ha-631968
	I1013 21:40:22.663333  295746 notify.go:220] Checking for updates...
	I1013 21:40:22.663775  295746 config.go:182] Loaded profile config "ha-631968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:40:22.663794  295746 status.go:174] checking status of ha-631968 ...
	I1013 21:40:22.664366  295746 cli_runner.go:164] Run: docker container inspect ha-631968 --format={{.State.Status}}
	I1013 21:40:22.683449  295746 status.go:371] ha-631968 host status = "Running" (err=<nil>)
	I1013 21:40:22.683494  295746 host.go:66] Checking if "ha-631968" exists ...
	I1013 21:40:22.683791  295746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-631968
	I1013 21:40:22.703212  295746 host.go:66] Checking if "ha-631968" exists ...
	I1013 21:40:22.703570  295746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:40:22.703631  295746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-631968
	I1013 21:40:22.722823  295746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/ha-631968/id_rsa Username:docker}
	I1013 21:40:22.820881  295746 ssh_runner.go:195] Run: systemctl --version
	I1013 21:40:22.827729  295746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:40:22.841400  295746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:40:22.906018  295746 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 21:40:22.894908296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:40:22.906852  295746 kubeconfig.go:125] found "ha-631968" server: "https://192.168.49.254:8443"
	I1013 21:40:22.906897  295746 api_server.go:166] Checking apiserver status ...
	I1013 21:40:22.906948  295746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:40:22.920057  295746 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1013 21:40:22.929518  295746 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:40:22.929578  295746 ssh_runner.go:195] Run: ls
	I1013 21:40:22.934472  295746 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 21:40:22.938966  295746 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 21:40:22.939011  295746 status.go:463] ha-631968 apiserver status = Running (err=<nil>)
	I1013 21:40:22.939033  295746 status.go:176] ha-631968 status: &{Name:ha-631968 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:40:22.939060  295746 status.go:174] checking status of ha-631968-m02 ...
	I1013 21:40:22.939341  295746 cli_runner.go:164] Run: docker container inspect ha-631968-m02 --format={{.State.Status}}
	I1013 21:40:22.958158  295746 status.go:371] ha-631968-m02 host status = "Stopped" (err=<nil>)
	I1013 21:40:22.958185  295746 status.go:384] host is not running, skipping remaining checks
	I1013 21:40:22.958192  295746 status.go:176] ha-631968-m02 status: &{Name:ha-631968-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:40:22.958222  295746 status.go:174] checking status of ha-631968-m03 ...
	I1013 21:40:22.958512  295746 cli_runner.go:164] Run: docker container inspect ha-631968-m03 --format={{.State.Status}}
	I1013 21:40:22.976815  295746 status.go:371] ha-631968-m03 host status = "Running" (err=<nil>)
	I1013 21:40:22.976844  295746 host.go:66] Checking if "ha-631968-m03" exists ...
	I1013 21:40:22.977165  295746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-631968-m03
	I1013 21:40:22.995596  295746 host.go:66] Checking if "ha-631968-m03" exists ...
	I1013 21:40:22.995911  295746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:40:22.995958  295746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-631968-m03
	I1013 21:40:23.015085  295746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/ha-631968-m03/id_rsa Username:docker}
	I1013 21:40:23.113821  295746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:40:23.127234  295746 kubeconfig.go:125] found "ha-631968" server: "https://192.168.49.254:8443"
	I1013 21:40:23.127265  295746 api_server.go:166] Checking apiserver status ...
	I1013 21:40:23.127312  295746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:40:23.138850  295746 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W1013 21:40:23.148286  295746 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:40:23.148363  295746 ssh_runner.go:195] Run: ls
	I1013 21:40:23.153168  295746 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 21:40:23.157849  295746 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 21:40:23.157871  295746 status.go:463] ha-631968-m03 apiserver status = Running (err=<nil>)
	I1013 21:40:23.157880  295746 status.go:176] ha-631968-m03 status: &{Name:ha-631968-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:40:23.157895  295746 status.go:174] checking status of ha-631968-m04 ...
	I1013 21:40:23.158170  295746 cli_runner.go:164] Run: docker container inspect ha-631968-m04 --format={{.State.Status}}
	I1013 21:40:23.176183  295746 status.go:371] ha-631968-m04 host status = "Running" (err=<nil>)
	I1013 21:40:23.176208  295746 host.go:66] Checking if "ha-631968-m04" exists ...
	I1013 21:40:23.176549  295746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-631968-m04
	I1013 21:40:23.195006  295746 host.go:66] Checking if "ha-631968-m04" exists ...
	I1013 21:40:23.195266  295746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:40:23.195303  295746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-631968-m04
	I1013 21:40:23.214031  295746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/ha-631968-m04/id_rsa Username:docker}
	I1013 21:40:23.311576  295746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:40:23.324551  295746 status.go:176] ha-631968-m04 status: &{Name:ha-631968-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.35s)
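
The status probe behind the log above is two-layered: each node's host state comes from `docker container inspect --format={{.State.Status}}`, and any running control plane is additionally checked against the apiserver's /healthz on the shared cluster endpoint (https://192.168.49.254:8443 here). A minimal Go sketch of those two checks; the insecure TLS client and hard-coded names are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
    )

    // hostStatus mirrors the logged `docker container inspect
    // --format={{.State.Status}}` call: it returns "running", "exited", etc.
    func hostStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format={{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    // healthz mirrors the "Checking apiserver healthz at ..." step; the
    // apiserver cert is self-signed, so verification is skipped here.
    func healthz(url string) (int, error) {
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := c.Get(url)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    func main() {
        for _, node := range []string{"ha-631968", "ha-631968-m02"} {
            st, err := hostStatus(node)
            fmt.Println(node, st, err)
        }
        code, err := healthz("https://192.168.49.254:8443/healthz")
        fmt.Println("healthz:", code, err) // 200 expected while the apiserver is up
    }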

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 node start m02 --alsologtostderr -v 5: (13.24015197s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 stop --alsologtostderr -v 5
E1013 21:41:02.826276  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 stop --alsologtostderr -v 5: (54.54471132s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 start --wait true --alsologtostderr -v 5
E1013 21:41:54.285487  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.291948  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.304051  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.326067  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.367640  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.449123  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.610677  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:54.932651  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:55.574215  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:56.856244  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:41:59.418356  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:42:04.540590  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:42:14.782524  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:42:25.899143  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:42:35.264093  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 start --wait true --alsologtostderr -v 5: (1m2.529357658s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.19s)
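
ha_test.go captures `node list` output on both sides of the stop/start cycle (call sites 458 and 474 above). A minimal sketch of that assertion, assuming the check is simply that the two listings are identical:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // nodeList captures `minikube node list` output for one profile.
    func nodeList(profile string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", profile, "node", "list").Output()
        return string(out), err
    }

    func main() {
        before, _ := nodeList("ha-631968")
        // ... stop and restart the cluster between the two captures ...
        after, _ := nodeList("ha-631968")
        if before != after {
            fmt.Println("node set changed across restart")
        }
    }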

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 node delete m03 --alsologtostderr -v 5: (9.304061056s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.12s)
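
The readiness assertion at the end of the block runs kubectl with a go-template that walks every node's conditions and prints the status of each Ready condition, one per line. A short Go sketch driving the same template and flagging anything that is not "True" (the pass criterion is inferred from the template, not quoted from the test):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template the test passes to `kubectl get nodes -o`.
        tmpl := `go-template={{range .items}}{{range .status.conditions}}` +
            `{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
        if err != nil {
            panic(err)
        }
        for _, status := range strings.Fields(string(out)) {
            if status != "True" {
                fmt.Println("node not ready:", status)
            }
        }
    }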

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 stop --alsologtostderr -v 5
E1013 21:43:16.226686  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 stop --alsologtostderr -v 5: (41.564573071s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5: exit status 7 (113.937367ms)

-- stdout --
	ha-631968
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-631968-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-631968-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1013 21:43:28.793882  309921 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:43:28.794377  309921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:43:28.794389  309921 out.go:374] Setting ErrFile to fd 2...
	I1013 21:43:28.794394  309921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:43:28.794626  309921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:43:28.794835  309921 out.go:368] Setting JSON to false
	I1013 21:43:28.794869  309921 mustload.go:65] Loading cluster: ha-631968
	I1013 21:43:28.795008  309921 notify.go:220] Checking for updates...
	I1013 21:43:28.795399  309921 config.go:182] Loaded profile config "ha-631968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:43:28.795418  309921 status.go:174] checking status of ha-631968 ...
	I1013 21:43:28.795860  309921 cli_runner.go:164] Run: docker container inspect ha-631968 --format={{.State.Status}}
	I1013 21:43:28.818009  309921 status.go:371] ha-631968 host status = "Stopped" (err=<nil>)
	I1013 21:43:28.818040  309921 status.go:384] host is not running, skipping remaining checks
	I1013 21:43:28.818050  309921 status.go:176] ha-631968 status: &{Name:ha-631968 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:43:28.818098  309921 status.go:174] checking status of ha-631968-m02 ...
	I1013 21:43:28.818435  309921 cli_runner.go:164] Run: docker container inspect ha-631968-m02 --format={{.State.Status}}
	I1013 21:43:28.836126  309921 status.go:371] ha-631968-m02 host status = "Stopped" (err=<nil>)
	I1013 21:43:28.836166  309921 status.go:384] host is not running, skipping remaining checks
	I1013 21:43:28.836177  309921 status.go:176] ha-631968-m02 status: &{Name:ha-631968-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:43:28.836204  309921 status.go:174] checking status of ha-631968-m04 ...
	I1013 21:43:28.836473  309921 cli_runner.go:164] Run: docker container inspect ha-631968-m04 --format={{.State.Status}}
	I1013 21:43:28.856406  309921 status.go:371] ha-631968-m04 host status = "Stopped" (err=<nil>)
	I1013 21:43:28.856428  309921 status.go:384] host is not running, skipping remaining checks
	I1013 21:43:28.856436  309921 status.go:176] ha-631968-m04 status: &{Name:ha-631968-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.68s)
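
Note that the harness treats `minikube status` against a stopped cluster as an expected non-zero exit: the run above returns exit status 7 rather than a hard failure. A sketch of reading that code from Go, assuming only that a stopped cluster yields a non-zero status exit:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-631968", "status")
        out, err := cmd.Output()
        fmt.Print(string(out)) // per-node host/kubelet/apiserver/kubeconfig lines
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode()) // 7 in the run above
        }
    }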

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.961727524s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 node add --control-plane --alsologtostderr -v 5
E1013 21:44:38.148773  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-631968 node add --control-plane --alsologtostderr -v 5: (1m6.193134668s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-631968 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-264006 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1013 21:46:02.825559  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-264006 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.207337245s)
--- PASS: TestJSONOutput/start/Command (38.21s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-264006 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-264006 --output=json --user=testUser: (7.995657828s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-201124 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-201124 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.304999ms)

-- stdout --
	{"specversion":"1.0","id":"b20d1282-23f9-4b47-9d05-5dd9cd7292d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-201124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"75d21d53-1cb1-45b4-9015-f82c8bb5b8b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"b591a41e-069c-4cc7-84d8-a38bb02baa4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec62ab7f-08eb-4c3f-9507-f511178e85f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig"}}
	{"specversion":"1.0","id":"e5c1215c-a758-4ff5-a544-2c6d827b44b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube"}}
	{"specversion":"1.0","id":"8a357301-c957-42d2-88cd-d69a25795076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a35f7928-4923-4da1-bcde-145cb338fc4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2411c3b9-93db-4a81-8ffd-c4e04d7a5e1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-201124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-201124
--- PASS: TestErrorJSONOutput (0.22s)
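
Every line that --output=json emits, including the DRV_UNSUPPORTED_OS error above, is a self-contained CloudEvents-style JSON object with the payload under "data". A minimal Go decoder for the fields visible in this log; the struct is trimmed to those keys and is an illustration, not minikube's schema definition:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models the envelope fields visible in the log lines above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // Pipe `minikube start --output=json ...` into stdin.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate any non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Println("exit", e.Data["exitcode"]+":", e.Data["message"])
            }
        }
    }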

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-770493 --network=
E1013 21:46:54.285253  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-770493 --network=: (25.983043712s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-770493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-770493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-770493: (2.167503554s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.17s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-302955 --network=bridge
E1013 21:47:21.997217  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-302955 --network=bridge: (24.786313663s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-302955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-302955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-302955: (1.999330385s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.81s)

=== RUN   TestKicExistingNetwork
I1013 21:47:32.323959  230929 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1013 21:47:32.342040  230929 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1013 21:47:32.342115  230929 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1013 21:47:32.342141  230929 cli_runner.go:164] Run: docker network inspect existing-network
W1013 21:47:32.358461  230929 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1013 21:47:32.358492  230929 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1013 21:47:32.358508  230929 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1013 21:47:32.358640  230929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 21:47:32.376104  230929 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d83a8e6a805 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:69:47:54:f9:98} reservation:<nil>}
I1013 21:47:32.376557  230929 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b97020}
I1013 21:47:32.376592  230929 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1013 21:47:32.376647  230929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1013 21:47:32.433742  230929 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-772022 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-772022 --network=existing-network: (22.936981248s)
helpers_test.go:175: Cleaning up "existing-network-772022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-772022
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-772022: (2.01370077s)
I1013 21:47:57.402456  230929 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.10s)
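
The interesting part of this test is the setup: it pre-creates a docker network carrying minikube's ownership labels so that the subsequent start adopts it instead of allocating a new one. A Go sketch reproducing the `docker network create` invocation from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Bridge network on the free 192.168.58.0/24 subnet, labelled the way
        // minikube labels its own networks.
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network").CombinedOutput()
        fmt.Println(string(out), err)
    }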

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-230773 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-230773 --subnet=192.168.60.0/24: (22.463838157s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-230773 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-230773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-230773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-230773: (2.16424219s)
--- PASS: TestKicCustomSubnet (24.65s)
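
The verification step reads the assigned subnet back with an inspect template. A companion sketch comparing the requested and reported subnets; the mismatch handling is illustrative, since the test itself only runs the inspect:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        want := "192.168.60.0/24"
        out, err := exec.Command("docker", "network", "inspect",
            "custom-subnet-230773",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            panic(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
        }
    }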

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-546159 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-546159 --static-ip=192.168.200.200: (23.414404588s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-546159 ip
helpers_test.go:175: Cleaning up "static-ip-546159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-546159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-546159: (2.178134039s)
--- PASS: TestKicStaticIP (25.73s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-938111 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-938111 --driver=docker  --container-runtime=crio: (19.422659751s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-942161 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-942161 --driver=docker  --container-runtime=crio: (20.291989794s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-938111
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-942161
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-942161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-942161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-942161: (2.394711534s)
helpers_test.go:175: Cleaning up "first-938111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-938111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-938111: (2.379380296s)
--- PASS: TestMinikubeProfile (45.73s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-469853 --memory=3072 --mount-string /tmp/TestMountStartserial1199977322/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-469853 --memory=3072 --mount-string /tmp/TestMountStartserial1199977322/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.460490477s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.46s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-469853 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-486173 --memory=3072 --mount-string /tmp/TestMountStartserial1199977322/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-486173 --memory=3072 --mount-string /tmp/TestMountStartserial1199977322/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.932933908s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.93s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486173 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-469853 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-469853 --alsologtostderr -v=5: (1.702963433s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486173 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-486173
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-486173: (1.249412359s)
--- PASS: TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-486173
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-486173: (6.248743675s)
--- PASS: TestMountStart/serial/RestartStopped (7.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486173 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-611399 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-611399 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m1.360462866s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (61.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-611399 -- rollout status deployment/busybox: (2.10768459s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-kqcf2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-q9ssf -- nslookup kubernetes.io
E1013 21:51:02.825123  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-kqcf2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-q9ssf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-kqcf2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-q9ssf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.48s)
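
The deployment check fans out: it lists the busybox pod names with a jsonpath query, then execs nslookup in each pod against progressively longer forms of the kubernetes service name. A condensed Go sketch of that loop, using the same `minikube kubectl -p` wrapper the test invokes:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        bin, profile := "out/minikube-linux-amd64", "multinode-611399"
        names, _ := exec.Command(bin, "kubectl", "-p", profile, "--",
            "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
        for _, pod := range strings.Fields(string(names)) {
            for _, host := range []string{"kubernetes.io", "kubernetes.default",
                "kubernetes.default.svc.cluster.local"} {
                err := exec.Command(bin, "kubectl", "-p", profile, "--",
                    "exec", pod, "--", "nslookup", host).Run()
                fmt.Println(pod, host, "lookup err:", err)
            }
        }
    }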

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-kqcf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-kqcf2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-q9ssf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-611399 -- exec busybox-7b57f96db7-q9ssf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
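
The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the resolved address of host.minikube.internal out of busybox nslookup's fixed-format output: line 5 is the answer's "Address 1: <ip> <name>" line, and its third space-separated field is the IP that then gets pinged. The same extraction in Go, with the sample output as an assumed illustration of the busybox format:

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP reproduces `awk 'NR==5' | cut -d' ' -f3`: take the fifth line
    // and return its third space-separated field.
    func hostIP(nslookupOut string) string {
        lines := strings.Split(nslookupOut, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.67.1 host.minikube.internal\n"
        fmt.Println(hostIP(sample)) // 192.168.67.1
    }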

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-611399 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-611399 -v=5 --alsologtostderr: (23.732225139s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.37s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-611399 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp testdata/cp-test.txt multinode-611399:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3592083902/001/cp-test_multinode-611399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399:/home/docker/cp-test.txt multinode-611399-m02:/home/docker/cp-test_multinode-611399_multinode-611399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test_multinode-611399_multinode-611399-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399:/home/docker/cp-test.txt multinode-611399-m03:/home/docker/cp-test_multinode-611399_multinode-611399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test_multinode-611399_multinode-611399-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp testdata/cp-test.txt multinode-611399-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3592083902/001/cp-test_multinode-611399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m02:/home/docker/cp-test.txt multinode-611399:/home/docker/cp-test_multinode-611399-m02_multinode-611399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test_multinode-611399-m02_multinode-611399.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m02:/home/docker/cp-test.txt multinode-611399-m03:/home/docker/cp-test_multinode-611399-m02_multinode-611399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test_multinode-611399-m02_multinode-611399-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp testdata/cp-test.txt multinode-611399-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3592083902/001/cp-test_multinode-611399-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m03:/home/docker/cp-test.txt multinode-611399:/home/docker/cp-test_multinode-611399-m03_multinode-611399.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399 "sudo cat /home/docker/cp-test_multinode-611399-m03_multinode-611399.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 cp multinode-611399-m03:/home/docker/cp-test.txt multinode-611399-m02:/home/docker/cp-test_multinode-611399-m03_multinode-611399-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 ssh -n multinode-611399-m02 "sudo cat /home/docker/cp-test_multinode-611399-m03_multinode-611399-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.60s)
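
For reference, the CopyFile steps above reduce to one loop: copy a file into each node with minikube cp, read it back with minikube ssh -n &lt;node&gt;, and compare. A minimal Go sketch of that loop, assuming a minikube binary on PATH; the profile, node names, and paths are taken from the log, while the comparison helper is illustrative:

    // Minimal sketch of the cp-then-verify loop exercised above.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const profile = "multinode-611399"
    	nodes := []string{profile, profile + "-m02", profile + "-m03"}
    	want, err := os.ReadFile("testdata/cp-test.txt")
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes {
    		// Copy the file into the node...
    		cp := exec.Command("minikube", "-p", profile, "cp",
    			"testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
    		if out, err := cp.CombinedOutput(); err != nil {
    			fmt.Printf("cp to %s failed: %v\n%s", n, err, out)
    			continue
    		}
    		// ...then read it back over ssh and compare contents.
    		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", n,
    			"sudo cat /home/docker/cp-test.txt").Output()
    		if err != nil || !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
    			fmt.Printf("verify on %s failed: %v\n", n, err)
    		}
    	}
    }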

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-611399 node stop m03: (1.260545491s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-611399 status: exit status 7 (489.887032ms)

-- stdout --
	multinode-611399
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-611399-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-611399-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr: exit status 7 (499.803404ms)

-- stdout --
	multinode-611399
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-611399-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-611399-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1013 21:51:40.589291  369419 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:51:40.589567  369419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:51:40.589575  369419 out.go:374] Setting ErrFile to fd 2...
	I1013 21:51:40.589580  369419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:51:40.589803  369419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:51:40.590009  369419 out.go:368] Setting JSON to false
	I1013 21:51:40.590042  369419 mustload.go:65] Loading cluster: multinode-611399
	I1013 21:51:40.590165  369419 notify.go:220] Checking for updates...
	I1013 21:51:40.590413  369419 config.go:182] Loaded profile config "multinode-611399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:51:40.590429  369419 status.go:174] checking status of multinode-611399 ...
	I1013 21:51:40.590886  369419 cli_runner.go:164] Run: docker container inspect multinode-611399 --format={{.State.Status}}
	I1013 21:51:40.609003  369419 status.go:371] multinode-611399 host status = "Running" (err=<nil>)
	I1013 21:51:40.609034  369419 host.go:66] Checking if "multinode-611399" exists ...
	I1013 21:51:40.609285  369419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-611399
	I1013 21:51:40.627042  369419 host.go:66] Checking if "multinode-611399" exists ...
	I1013 21:51:40.627304  369419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:51:40.627362  369419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-611399
	I1013 21:51:40.645634  369419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/multinode-611399/id_rsa Username:docker}
	I1013 21:51:40.740732  369419 ssh_runner.go:195] Run: systemctl --version
	I1013 21:51:40.747307  369419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:51:40.760016  369419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:51:40.819184  369419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-13 21:51:40.809486409 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 21:51:40.819740  369419 kubeconfig.go:125] found "multinode-611399" server: "https://192.168.67.2:8443"
	I1013 21:51:40.819771  369419 api_server.go:166] Checking apiserver status ...
	I1013 21:51:40.819815  369419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:51:40.831899  369419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	W1013 21:51:40.840397  369419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:51:40.840460  369419 ssh_runner.go:195] Run: ls
	I1013 21:51:40.844369  369419 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1013 21:51:40.849324  369419 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1013 21:51:40.849348  369419 status.go:463] multinode-611399 apiserver status = Running (err=<nil>)
	I1013 21:51:40.849372  369419 status.go:176] multinode-611399 status: &{Name:multinode-611399 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:51:40.849391  369419 status.go:174] checking status of multinode-611399-m02 ...
	I1013 21:51:40.849621  369419 cli_runner.go:164] Run: docker container inspect multinode-611399-m02 --format={{.State.Status}}
	I1013 21:51:40.866873  369419 status.go:371] multinode-611399-m02 host status = "Running" (err=<nil>)
	I1013 21:51:40.866894  369419 host.go:66] Checking if "multinode-611399-m02" exists ...
	I1013 21:51:40.867194  369419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-611399-m02
	I1013 21:51:40.883831  369419 host.go:66] Checking if "multinode-611399-m02" exists ...
	I1013 21:51:40.884160  369419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:51:40.884201  369419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-611399-m02
	I1013 21:51:40.901640  369419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21724-226873/.minikube/machines/multinode-611399-m02/id_rsa Username:docker}
	I1013 21:51:40.996499  369419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:51:41.020768  369419 status.go:176] multinode-611399-m02 status: &{Name:multinode-611399-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:51:41.020814  369419 status.go:174] checking status of multinode-611399-m03 ...
	I1013 21:51:41.021129  369419 cli_runner.go:164] Run: docker container inspect multinode-611399-m03 --format={{.State.Status}}
	I1013 21:51:41.038739  369419 status.go:371] multinode-611399-m03 host status = "Stopped" (err=<nil>)
	I1013 21:51:41.038761  369419 status.go:384] host is not running, skipping remaining checks
	I1013 21:51:41.038769  369419 status.go:176] multinode-611399-m03 status: &{Name:multinode-611399-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
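
Note that minikube status deliberately exits 7, not 0, when any node's host is stopped, so anything scripting against it must branch on the exit code rather than treat nonzero as failure. A minimal Go sketch, with the profile name taken from the log:

    // Sketch: branch on the exit code of "minikube status". The run
    // above demonstrates exit status 7 when a node is stopped.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "multinode-611399", "status").CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("all nodes running")
    	case errors.As(err, &ee) && ee.ExitCode() == 7:
    		fmt.Printf("one or more nodes stopped (expected state):\n%s", out)
    	default:
    		fmt.Printf("status failed outright: %v\n", err)
    	}
    }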

TestMultiNode/serial/StartAfterStop (7.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-611399 node start m03 -v=5 --alsologtostderr: (6.591875346s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.29s)

TestMultiNode/serial/RestartKeepsNodes (57.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-611399
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-611399
E1013 21:51:54.285707  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-611399: (29.578355203s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-611399 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-611399 --wait=true -v=5 --alsologtostderr: (27.821981278s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-611399
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.51s)

TestMultiNode/serial/DeleteNode (5.04s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-611399 node delete m03: (4.429646131s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.04s)
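
The go-template passed to kubectl at multinode_test.go:444 is hard to read through the test harness quoting; unescaped, it iterates each node's status.conditions and prints only the Ready condition's status, so a healthy cluster prints one True line per remaining node:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'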

TestMultiNode/serial/StopMultiNode (28.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-611399 stop: (28.386014998s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-611399 status: exit status 7 (91.431112ms)

-- stdout --
	multinode-611399
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-611399-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr: exit status 7 (90.240965ms)

-- stdout --
	multinode-611399
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-611399-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1013 21:53:19.404386  378794 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:53:19.404663  378794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:53:19.404672  378794 out.go:374] Setting ErrFile to fd 2...
	I1013 21:53:19.404676  378794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:53:19.404881  378794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 21:53:19.405077  378794 out.go:368] Setting JSON to false
	I1013 21:53:19.405107  378794 mustload.go:65] Loading cluster: multinode-611399
	I1013 21:53:19.405164  378794 notify.go:220] Checking for updates...
	I1013 21:53:19.405541  378794 config.go:182] Loaded profile config "multinode-611399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:53:19.405564  378794 status.go:174] checking status of multinode-611399 ...
	I1013 21:53:19.406083  378794 cli_runner.go:164] Run: docker container inspect multinode-611399 --format={{.State.Status}}
	I1013 21:53:19.425870  378794 status.go:371] multinode-611399 host status = "Stopped" (err=<nil>)
	I1013 21:53:19.425896  378794 status.go:384] host is not running, skipping remaining checks
	I1013 21:53:19.425905  378794 status.go:176] multinode-611399 status: &{Name:multinode-611399 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:53:19.425934  378794 status.go:174] checking status of multinode-611399-m02 ...
	I1013 21:53:19.426307  378794 cli_runner.go:164] Run: docker container inspect multinode-611399-m02 --format={{.State.Status}}
	I1013 21:53:19.444851  378794 status.go:371] multinode-611399-m02 host status = "Stopped" (err=<nil>)
	I1013 21:53:19.444871  378794 status.go:384] host is not running, skipping remaining checks
	I1013 21:53:19.444889  378794 status.go:176] multinode-611399-m02 status: &{Name:multinode-611399-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.57s)

TestMultiNode/serial/RestartMultiNode (34.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-611399 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-611399 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (34.338259014s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-611399 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (34.93s)

TestMultiNode/serial/ValidateNameConflict (23.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-611399
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-611399-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-611399-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.692422ms)

-- stdout --
	* [multinode-611399-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-611399-m02' is duplicated with machine name 'multinode-611399-m02' in profile 'multinode-611399'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-611399-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-611399-m03 --driver=docker  --container-runtime=crio: (20.520664481s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-611399
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-611399: exit status 80 (289.31733ms)

-- stdout --
	* Adding node m03 to cluster multinode-611399 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-611399-m03 already exists in multinode-611399-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-611399-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-611399-m03: (2.378409735s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.31s)
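
The two failures above encode the uniqueness rules: a new profile may not reuse a machine name owned by an existing multinode profile (the -m02 case, exit status 14), and node add refuses a node whose name is already a standalone profile (the -m03 case, exit status 80). A hypothetical sketch of the first rule; validateProfileName and its inputs are illustrative, not minikube's actual API:

    package profilecheck

    import (
    	"fmt"
    	"strings"
    )

    // validateProfileName rejects a candidate that equals an existing
    // profile or looks like one of its per-node machine names ("<profile>-mNN").
    func validateProfileName(candidate string, existingProfiles []string) error {
    	for _, p := range existingProfiles {
    		if candidate == p || strings.HasPrefix(candidate, p+"-m") {
    			return fmt.Errorf("profile name %q collides with profile %q", candidate, p)
    		}
    	}
    	return nil
    }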

TestPreload (107.19s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-333693 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-333693 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.554759031s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-333693 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-333693
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-333693: (5.876633505s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-333693 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1013 21:56:02.825290  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-333693 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.120203978s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-333693 image list
helpers_test.go:175: Cleaning up "test-preload-333693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-333693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-333693: (2.439493284s)
--- PASS: TestPreload (107.19s)
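
The substance of TestPreload is the final image list check: an image pulled while preload was disabled must survive the stop/start cycle. A minimal Go sketch of that assertion, with the profile and image names taken from the log:

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// After the restart, the image pulled earlier must still be listed.
    	out, err := exec.Command("minikube", "-p", "test-preload-333693", "image", "list").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
    		log.Fatal("image pulled before the restart is gone")
    	}
    }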

TestScheduledStopUnix (98.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-408315 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-408315 --memory=3072 --driver=docker  --container-runtime=crio: (21.369105577s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-408315 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-408315 -n scheduled-stop-408315
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-408315 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 21:56:30.878132  230929 retry.go:31] will retry after 85.595µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.879279  230929 retry.go:31] will retry after 171.329µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.880443  230929 retry.go:31] will retry after 285.515µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.881574  230929 retry.go:31] will retry after 497.182µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.882710  230929 retry.go:31] will retry after 490.962µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.883829  230929 retry.go:31] will retry after 578.632µs: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.884938  230929 retry.go:31] will retry after 1.03718ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.886063  230929 retry.go:31] will retry after 2.112756ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.889267  230929 retry.go:31] will retry after 3.836491ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.893466  230929 retry.go:31] will retry after 2.677689ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.897058  230929 retry.go:31] will retry after 2.94655ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.900256  230929 retry.go:31] will retry after 6.2724ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.908058  230929 retry.go:31] will retry after 12.412629ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.921302  230929 retry.go:31] will retry after 14.833305ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.936544  230929 retry.go:31] will retry after 16.669676ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
I1013 21:56:30.953825  230929 retry.go:31] will retry after 42.795184ms: open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/scheduled-stop-408315/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-408315 --cancel-scheduled
E1013 21:56:54.285789  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-408315 -n scheduled-stop-408315
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-408315
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-408315 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-408315
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-408315: exit status 7 (75.045242ms)

-- stdout --
	scheduled-stop-408315
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-408315 -n scheduled-stop-408315
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-408315 -n scheduled-stop-408315: exit status 7 (70.443875ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-408315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-408315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-408315: (5.206174593s)
--- PASS: TestScheduledStopUnix (98.02s)
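
The retry.go lines above (85µs, 171µs, 285µs, ...) show a jittered, roughly geometric backoff while polling for the scheduled-stop pid file. A sketch of that polling pattern; the helper name and constants are illustrative:

    package waitfile

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForFile polls until path exists, roughly doubling the delay
    // each attempt, as the retry.go trace above does for the pid file.
    func waitForFile(path string, attempts int) error {
    	delay := 100 * time.Microsecond
    	for i := 0; i < attempts; i++ {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }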

TestInsufficientStorage (9.81s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-240381 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-240381 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.305512946s)

-- stdout --
	{"specversion":"1.0","id":"3ad91507-7b80-417e-91ba-a38b8b098c87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-240381] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76020d1e-d093-4a80-be72-82d82f4848ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"561b4c85-4e9b-4ea3-8f56-b6027d667100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"59444ad1-8844-4b61-95a1-ef4cfc38d8aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig"}}
	{"specversion":"1.0","id":"bde2fbb4-9e1e-45cc-99d4-f4c6fd65a4ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube"}}
	{"specversion":"1.0","id":"2a386408-60da-4209-aca9-0b05f8781bec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8216a494-43c2-49b8-994b-c9af6cc6ce03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"23affd2b-4a86-4c9c-8bb3-b4833cb33501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b93cd6bb-71d2-4c08-a7e7-37f3fa14c430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1d7b5aac-28e7-42f3-b173-a2fdcc029a0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f510a431-84dc-455b-b0b5-224a7d794b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ce8bdbd5-b92e-425d-89b0-927ffc53a9bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-240381\" primary control-plane node in \"insufficient-storage-240381\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69f6c44b-17eb-4dc5-99da-f712e0de7b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760363564-21724 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d3c02b5-23cf-43bc-9c6a-82091e9ff0a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"51d26c7c-fe32-4bd3-906a-6cecfa9956c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-240381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-240381 --output=json --layout=cluster: exit status 7 (292.708798ms)

-- stdout --
	{"Name":"insufficient-storage-240381","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-240381","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1013 21:57:54.678694  399035 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-240381" does not appear in /home/jenkins/minikube-integration/21724-226873/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-240381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-240381 --output=json --layout=cluster: exit status 7 (283.093265ms)

-- stdout --
	{"Name":"insufficient-storage-240381","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-240381","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1013 21:57:54.962601  399145 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-240381" does not appear in /home/jenkins/minikube-integration/21724-226873/kubeconfig
	E1013 21:57:54.973497  399145 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/insufficient-storage-240381/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-240381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-240381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-240381: (1.92805234s)
--- PASS: TestInsufficientStorage (9.81s)
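
With --output=json, start emits one CloudEvents-style JSON object per line, and the induced failure surfaces as a type io.k8s.sigs.minikube.error event carrying exitcode 26 and name RSRC_DOCKER_STORAGE. A sketch of scanning such a stream, assuming the string-valued data fields shown above:

    package events

    import (
    	"bufio"
    	"encoding/json"
    	"io"
    )

    type minikubeEvent struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    // firstError returns the first error event in a --output=json stream.
    func firstError(r io.Reader) *minikubeEvent {
    	sc := bufio.NewScanner(r)
    	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
    	for sc.Scan() {
    		var ev minikubeEvent
    		if json.Unmarshal(sc.Bytes(), &ev) != nil {
    			continue // tolerate non-JSON noise in the stream
    		}
    		if ev.Type == "io.k8s.sigs.minikube.error" {
    			return &ev // e.g. ev.Data["exitcode"] == "26"
    		}
    	}
    	return nil
    }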

TestRunningBinaryUpgrade (48.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3179386573 start -p running-upgrade-850760 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1013 21:59:05.900509  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3179386573 start -p running-upgrade-850760 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.191325777s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-850760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-850760 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.423408234s)
helpers_test.go:175: Cleaning up "running-upgrade-850760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-850760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-850760: (2.440159326s)
--- PASS: TestRunningBinaryUpgrade (48.51s)

TestKubernetesUpgrade (318.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.088145266s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-050146
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-050146: (2.412326148s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-050146 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-050146 status --format={{.Host}}: exit status 7 (74.491729ms)

-- stdout --
	Stopped

                                                
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.21589194s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-050146 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.330042ms)

-- stdout --
	* [kubernetes-upgrade-050146] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-050146
	    minikube start -p kubernetes-upgrade-050146 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0501462 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-050146 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-050146 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.473219113s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-050146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-050146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-050146: (4.196990124s)
--- PASS: TestKubernetesUpgrade (318.61s)
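
The downgrade attempt above fails fast (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster; the guard amounts to a semantic-version comparison between the requested and running versions. A sketch using golang.org/x/mod/semver, which is not necessarily the library minikube itself uses; inputs must carry the "v" prefix, as v1.28.0 and v1.34.1 do:

    package upgrade

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    // checkNoDowngrade mirrors the guard seen above: refuse any request
    // for a Kubernetes version older than the one the cluster runs.
    func checkNoDowngrade(existing, requested string) error {
    	if semver.Compare(requested, existing) < 0 {
    		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
    			existing, requested)
    	}
    	return nil
    }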

TestMissingContainerUpgrade (91.00s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1484953308 start -p missing-upgrade-878493 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1484953308 start -p missing-upgrade-878493 --memory=3072 --driver=docker  --container-runtime=crio: (42.944291991s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-878493
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-878493: (1.768044678s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-878493
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-878493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-878493 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.417575727s)
helpers_test.go:175: Cleaning up "missing-upgrade-878493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-878493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-878493: (2.396843011s)
--- PASS: TestMissingContainerUpgrade (91.00s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (73.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2336930156 start -p stopped-upgrade-126916 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1013 21:58:17.358741  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2336930156 start -p stopped-upgrade-126916 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.323430127s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2336930156 -p stopped-upgrade-126916 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2336930156 -p stopped-upgrade-126916 stop: (12.439118434s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-126916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-126916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.520884128s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-126916
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-126916: (1.038256399s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestPause/serial/Start (43.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-253311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-253311 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.555087332s)
--- PASS: TestPause/serial/Start (43.56s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (77.073776ms)

-- stdout --
	* [NoKubernetes-686990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
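
The exit status 14 (MK_USAGE) above comes from flag validation alone, before any driver work starts: --kubernetes-version contradicts --no-kubernetes. A hypothetical sketch of that mutual-exclusion check; the function and parameter names are illustrative, not minikube's:

    package flags

    import "errors"

    // validateNoKubernetes rejects the contradictory flag combination
    // the test exercises.
    func validateNoKubernetes(noKubernetes bool, kubernetesVersion string) error {
    	if noKubernetes && kubernetesVersion != "" {
    		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
    	}
    	return nil
    }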

TestNoKubernetes/serial/StartWithK8s (24.07s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-686990 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-686990 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.7488388s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-686990 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.07s)

TestNoKubernetes/serial/StartWithStopK8s (29.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.029191931s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-686990 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-686990 status -o json: exit status 2 (352.020972ms)

-- stdout --
	{"Name":"NoKubernetes-686990","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-686990
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-686990: (2.115133421s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.50s)

TestPause/serial/SecondStartNoReconfiguration (5.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-253311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-253311 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.943289709s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.96s)

TestNoKubernetes/serial/Start (4.73s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-686990 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.731637939s)
--- PASS: TestNoKubernetes/serial/Start (4.73s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-686990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-686990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.801825ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
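
Note: this check passes precisely because `systemctl is-active --quiet` exits non-zero when the unit is not active; the `ssh: Process exited with status 3` above is systemd's conventional code for an inactive unit. The same probe without `--quiet` also prints the state (a sketch, not part of the test):

	out/minikube-linux-amd64 ssh -p NoKubernetes-686990 "sudo systemctl is-active kubelet"; echo "exit=$?"
	# while Kubernetes is disabled this prints "inactive" and exits non-zero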

TestNoKubernetes/serial/ProfileList (1.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.82s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-686990
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-686990: (1.248132991s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (6.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-686990 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-686990 --driver=docker  --container-runtime=crio: (6.797127641s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.80s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-686990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-686990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.851688ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestNetworkPlugins/group/false (3.51s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-200102 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-200102 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (155.193252ms)

-- stdout --
	* [false-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1013 22:00:42.158965  447825 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:00:42.159250  447825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:00:42.159259  447825 out.go:374] Setting ErrFile to fd 2...
	I1013 22:00:42.159264  447825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:00:42.159483  447825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-226873/.minikube/bin
	I1013 22:00:42.160041  447825 out.go:368] Setting JSON to false
	I1013 22:00:42.161224  447825 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6190,"bootTime":1760386652,"procs":424,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:00:42.161324  447825 start.go:141] virtualization: kvm guest
	I1013 22:00:42.163251  447825 out.go:179] * [false-200102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:00:42.164615  447825 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:00:42.164675  447825 notify.go:220] Checking for updates...
	I1013 22:00:42.167252  447825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:00:42.168770  447825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-226873/kubeconfig
	I1013 22:00:42.170122  447825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-226873/.minikube
	I1013 22:00:42.171406  447825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:00:42.172560  447825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:00:42.174494  447825 config.go:182] Loaded profile config "cert-expiration-894101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:42.174636  447825 config.go:182] Loaded profile config "cert-options-442906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:42.174777  447825 config.go:182] Loaded profile config "kubernetes-upgrade-050146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:00:42.174895  447825 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:00:42.200215  447825 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1013 22:00:42.200318  447825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:00:42.256918  447825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-13 22:00:42.246274166 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652166656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1013 22:00:42.257101  447825 docker.go:318] overlay module found
	I1013 22:00:42.258847  447825 out.go:179] * Using the docker driver based on user configuration
	I1013 22:00:42.260063  447825 start.go:305] selected driver: docker
	I1013 22:00:42.260079  447825 start.go:925] validating driver "docker" against <nil>
	I1013 22:00:42.260090  447825 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:00:42.261928  447825 out.go:203] 
	W1013 22:00:42.263228  447825 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1013 22:00:42.264403  447825 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-200102 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-200102

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-200102

>>> host: /etc/nsswitch.conf:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/hosts:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/resolv.conf:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-200102

>>> host: crictl pods:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: crictl containers:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> k8s: describe netcat deployment:
error: context "false-200102" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-200102" does not exist

>>> k8s: netcat logs:
error: context "false-200102" does not exist

>>> k8s: describe coredns deployment:
error: context "false-200102" does not exist

>>> k8s: describe coredns pods:
error: context "false-200102" does not exist

>>> k8s: coredns logs:
error: context "false-200102" does not exist

>>> k8s: describe api server pod(s):
error: context "false-200102" does not exist

>>> k8s: api server logs:
error: context "false-200102" does not exist

>>> host: /etc/cni:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: ip a s:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: ip r s:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: iptables-save:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: iptables table nat:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> k8s: describe kube-proxy daemon set:
error: context "false-200102" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-200102" does not exist

>>> k8s: kube-proxy logs:
error: context "false-200102" does not exist

>>> host: kubelet daemon status:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: kubelet daemon config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> k8s: kubelet logs:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-894101
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 21:58:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-050146
contexts:
- context:
    cluster: cert-expiration-894101
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-894101
  name: cert-expiration-894101
- context:
    cluster: kubernetes-upgrade-050146
    user: kubernetes-upgrade-050146
  name: kubernetes-upgrade-050146
current-context: ""
kind: Config
users:
- name: cert-expiration-894101
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.key
- name: kubernetes-upgrade-050146
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-200102

>>> host: docker daemon status:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: docker daemon config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/docker/daemon.json:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: docker system info:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: cri-docker daemon status:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: cri-docker daemon config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: cri-dockerd version:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: containerd daemon status:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: containerd daemon config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/containerd/config.toml:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: containerd config dump:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: crio daemon status:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: crio daemon config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: /etc/crio:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

>>> host: crio config:
* Profile "false-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-200102"

----------------------- debugLogs end: false-200102 [took: 3.183141962s] --------------------------------
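
Note: every "context was not found" and "does not exist" entry above is consistent with the kubeconfig dump: only the cert-expiration-894101 and kubernetes-upgrade-050146 contexts exist and current-context is empty, so no false-200102 context was ever created (the start command was rejected before provisioning). A sketch of addressing one of the existing profiles explicitly, using standard kubectl flags:

	kubectl config get-contexts                           # lists the two contexts dumped above
	kubectl --context cert-expiration-894101 get pods -A  # pin a command to one profile
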
helpers_test.go:175: Cleaning up "false-200102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-200102
--- PASS: TestNetworkPlugins/group/false (3.51s)
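
Note: exit status 14 (MK_USAGE) is the point of this test: cri-o ships no built-in network plugin, so minikube rejects `--cni=false` before provisioning anything. A start invocation that would pass this validation supplies an explicit CNI, for example (a sketch, not executed in this run):

	out/minikube-linux-amd64 start -p false-200102 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio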

TestStartStop/group/old-k8s-version/serial/FirstStart (47.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1013 22:01:02.825084  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.766874354s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (47.77s)

TestStartStop/group/no-preload/serial/FirstStart (51.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.309606053s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.31s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-534822 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [26402fa0-a911-42a3-ad38-62ca0dd617e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [26402fa0-a911-42a3-ad38-62ca0dd617e3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003144392s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-534822 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)
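
Note: the 8m0s poll above selects pods by the `integration-test=busybox` label rather than by name. An equivalent manual readiness check with plain kubectl (a sketch, assuming the same kubeconfig context):

	kubectl --context old-k8s-version-534822 wait pod -l integration-test=busybox --for=condition=Ready --timeout=480s
	kubectl --context old-k8s-version-534822 exec busybox -- /bin/sh -c "ulimit -n"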

TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-534822 --alsologtostderr -v=3
E1013 22:01:54.285928  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/functional-412292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-534822 --alsologtostderr -v=3: (15.990470328s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

TestStartStop/group/no-preload/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-080337 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b8938720-a9c3-41e9-8f57-5cd2919e55d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b8938720-a9c3-41e9-8f57-5cd2919e55d7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003138403s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-080337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822: exit status 7 (75.871094ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-534822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
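
Note: `--format={{.Host}}` is a Go template over the same fields the JSON status exposes (Host, Kubelet, APIServer, Kubeconfig), and exit status 7 here signals a stopped host rather than a command failure. Several fields can be combined in one template (a sketch, not from this run):

	out/minikube-linux-amd64 status -p old-k8s-version-534822 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'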

TestStartStop/group/old-k8s-version/serial/SecondStart (42.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-534822 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (42.531221989s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-534822 -n old-k8s-version-534822
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.90s)

TestStartStop/group/no-preload/serial/Stop (18.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-080337 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-080337 --alsologtostderr -v=3: (18.065425974s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337: exit status 7 (92.513433ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-080337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (47.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-080337 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.981400115s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080337 -n no-preload-080337
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.35s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-85qc8" [0023517f-8e99-45ca-9130-c16e98edc916] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003852942s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-85qc8" [0023517f-8e99-45ca-9130-c16e98edc916] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004010056s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-534822 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-534822 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
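
Note: the image names above are parsed out of `image list --format=json`. Assuming that JSON is an array of image objects carrying a `repoTags` field (an assumption worth verifying against your minikube version), the tags can be listed directly with jq:

	out/minikube-linux-amd64 -p old-k8s-version-534822 image list --format=json | jq -r '.[].repoTags[]'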

TestStartStop/group/embed-certs/serial/FirstStart (70.65s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.645089979s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.65s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.869524258s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mkvmc" [8b62cb5c-c068-444e-a216-87c6c73d107b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004079006s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mkvmc" [8b62cb5c-c068-444e-a216-87c6c73d107b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00421001s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-080337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-080337 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/FirstStart (27.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.279816661s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.28s)

TestNetworkPlugins/group/auto/Start (39.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.144458847s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1f8454a6-017d-4521-b0a5-2f14f3d912b2] Pending
helpers_test.go:352: "busybox" [1f8454a6-017d-4521-b0a5-2f14f3d912b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1f8454a6-017d-4521-b0a5-2f14f3d912b2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.005096307s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-505851 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-505851 --alsologtostderr -v=3: (18.194560879s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.19s)

TestStartStop/group/newest-cni/serial/Stop (7.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-843554 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-843554 --alsologtostderr -v=3: (7.984368684s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.98s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554: exit status 7 (76.688349ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-843554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (10.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-843554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.313492898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-843554 -n newest-cni-843554
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.68s)

TestStartStop/group/embed-certs/serial/DeployApp (7.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-521669 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e6166149-7670-4cf2-b4fb-21490d127189] Pending
helpers_test.go:352: "busybox" [e6166149-7670-4cf2-b4fb-21490d127189] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e6166149-7670-4cf2-b4fb-21490d127189] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003911921s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-521669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.28s)
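The DeployApp step ends by reading the busybox pod's open-file limit with `ulimit -n`. A minimal sketch of the same probe, assuming kubectl is on PATH and using the context name from this log (the harness uses its own helpers):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exec into the busybox pod and print the file-descriptor limit its
	// container sees; the log above only records that this command ran.
	out, err := exec.Command("kubectl", "--context", "embed-certs-521669",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Printf("ulimit -n in pod: %s", out)
}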

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851: exit status 7 (85.475821ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-505851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-505851 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.519398545s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505851 -n default-k8s-diff-port-505851
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-200102 "pgrep -a kubelet"
I1013 22:04:27.492215  230929 config.go:182] Loaded profile config "auto-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7jtzl" [bb3e536f-4eda-40eb-84a3-f6f0de51b108] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7jtzl" [bb3e536f-4eda-40eb-84a3-f6f0de51b108] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00472025s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)
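The NetCatPod steps all follow the same shape: force-replace the netcat deployment, then poll until every pod labeled app=netcat is Ready (the Pending → Running transitions above). Below is a hypothetical polling loop using kubectl's jsonpath output, with the context and label taken from this log; the real wait lives in helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitReady polls until every pod matching label reports the Ready
// condition as "True", or the deadline passes.
func waitReady(kubeCtx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeCtx,
			"get", "pods", "-l", label, "-o",
			`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`,
		).Output()
		if err == nil {
			states := strings.Fields(string(out))
			allReady := len(states) > 0
			for _, s := range states {
				if s != "True" {
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not ready within %v", label, timeout)
}

func main() {
	fmt.Println(waitReady("auto-200102", "app=netcat", 15*time.Minute))
}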

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-843554 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
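VerifyKubernetesImages lists the images in the node's runtime as JSON and flags anything outside the expected registries (here kindest/kindnetd). A hedged sketch of that idea follows; the JSON is decoded generically because the exact schema of `minikube image list --format=json` is not shown in this log, and the `repoTags` field name is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-843554",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		tags, _ := img["repoTags"].([]any) // assumed field name
		for _, t := range tags {
			tag := fmt.Sprint(t)
			// Registries treated as "minikube's own" here are illustrative.
			if !strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}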

TestStartStop/group/embed-certs/serial/Stop (16.84s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-521669 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-521669 --alsologtostderr -v=3: (16.836672989s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.84s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
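The DNS/Localhost/HairPin trio above is repeated for every CNI under test: DNS resolves the kubernetes.default service, Localhost dials the pod's own loopback, and HairPin dials the pod's own Service name (netcat:8080), which only succeeds if traffic leaving the pod via the service VIP can loop back to the same pod (hairpin NAT). A sketch of the three probes as one loop, with the context name taken from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := []struct{ name, cmd string }{
		{"dns", "nslookup kubernetes.default"},
		// nc flags: -z scan without sending data, -w 5 connect timeout,
		// -i 5 interval between probes.
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"},
		// Dialing the pod's own Service name exercises hairpin NAT.
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		out, err := exec.Command("kubectl", "--context", "auto-200102",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", p.cmd).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", p.name, err, out)
	}
}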

TestNetworkPlugins/group/kindnet/Start (47.79s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.788917596s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.79s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669: exit status 7 (86.809346ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-521669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (49.54s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-521669 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.135288322s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-521669 -n embed-certs-521669
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.54s)

TestNetworkPlugins/group/calico/Start (52.62s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.617494084s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2xpgc" [17da7b5e-198c-4f31-80cf-d0fdcd8755c4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003499208s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2xpgc" [17da7b5e-198c-4f31-80cf-d0fdcd8755c4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005434183s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-505851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-glhzg" [4b41b6cb-6930-47d5-ac4a-8caa5b4466e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004197842s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-505851 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-200102 "pgrep -a kubelet"
I1013 22:05:31.927758  230929 config.go:182] Loaded profile config "kindnet-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-55zkb" [40056a92-020d-4100-9427-074635b1fe75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-55zkb" [40056a92-020d-4100-9427-074635b1fe75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004418851s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-69m9v" [5cf6691f-8c78-4fe0-82df-81399a0025fc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003457296s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (50.81s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.813996765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.81s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-69m9v" [5cf6691f-8c78-4fe0-82df-81399a0025fc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004750795s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-521669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-521669 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-r6ts6" [04357e44-6783-45c3-8951-e76ac35971d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003995706s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-200102 "pgrep -a kubelet"
I1013 22:05:56.518575  230929 config.go:182] Loaded profile config "calico-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9qvbh" [c834a256-e668-48de-b1e4-6e4211f7a194] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9qvbh" [c834a256-e668-48de-b1e4-6e4211f7a194] Running
E1013 22:06:02.825985  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/addons-143775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004096508s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/enable-default-cni/Start (68.57s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.567706189s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.57s)

TestNetworkPlugins/group/flannel/Start (50.97s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.968838825s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.97s)

TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (71.27s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-200102 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.264490831s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-200102 "pgrep -a kubelet"
I1013 22:06:31.687874  230929 config.go:182] Loaded profile config "custom-flannel-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4pksq" [b546b692-5d26-454e-b66c-5d5975605b2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4pksq" [b546b692-5d26-454e-b66c-5d5975605b2c] Running
E1013 22:06:39.079131  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.085613  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.097054  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.118497  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.160060  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.241532  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.403157  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:39.724766  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:40.366791  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:06:41.648750  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00363635s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1013 22:06:44.210589  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7hzp8" [67a0b59f-8aab-4942-a5b1-bfe38d18643c] Running
E1013 22:06:59.574148  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/old-k8s-version-534822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003968529s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-200102 "pgrep -a kubelet"
I1013 22:07:01.583908  230929 config.go:182] Loaded profile config "flannel-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jv4x6" [b1a93477-7941-4751-b243-0902e2d9659c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jv4x6" [b1a93477-7941-4751-b243-0902e2d9659c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003984574s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-200102 "pgrep -a kubelet"
I1013 22:07:08.233738  230929 config.go:182] Loaded profile config "enable-default-cni-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lclcx" [c0434f56-8913-4522-839a-f7de76ea7146] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1013 22:07:09.046732  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lclcx" [c0434f56-8913-4522-839a-f7de76ea7146] Running
E1013 22:07:14.168532  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004465295s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-200102 "pgrep -a kubelet"
I1013 22:07:40.198501  230929 config.go:182] Loaded profile config "bridge-200102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-200102 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m6hnt" [3d833e15-a692-400e-b5bb-d77380a75d45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m6hnt" [3d833e15-a692-400e-b5bb-d77380a75d45] Running
E1013 22:07:44.892100  230929 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/no-preload-080337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003960653s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-200102 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-200102 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-659143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-659143
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-200102 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-200102

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-200102

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/hosts:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/resolv.conf:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-200102

>>> host: crictl pods:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: crictl containers:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> k8s: describe netcat deployment:
error: context "kubenet-200102" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-200102" does not exist

>>> k8s: netcat logs:
error: context "kubenet-200102" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-200102" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-200102" does not exist

>>> k8s: coredns logs:
error: context "kubenet-200102" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-200102" does not exist

>>> k8s: api server logs:
error: context "kubenet-200102" does not exist

>>> host: /etc/cni:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: ip a s:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: ip r s:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: iptables-save:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: iptables table nat:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-200102" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-200102" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-200102" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: kubelet daemon config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> k8s: kubelet logs:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-894101
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8555
  name: cert-options-442906
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 21:58:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-050146
contexts:
- context:
    cluster: cert-expiration-894101
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-894101
  name: cert-expiration-894101
- context:
    cluster: cert-options-442906
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-options-442906
  name: cert-options-442906
- context:
    cluster: kubernetes-upgrade-050146
    user: kubernetes-upgrade-050146
  name: kubernetes-upgrade-050146
current-context: cert-options-442906
kind: Config
users:
- name: cert-expiration-894101
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.key
- name: cert-options-442906
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-options-442906/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-options-442906/client.key
- name: kubernetes-upgrade-050146
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-200102

>>> host: docker daemon status:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: docker daemon config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: docker system info:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: cri-docker daemon status:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: cri-docker daemon config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: cri-dockerd version:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: containerd daemon status:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: containerd daemon config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: containerd config dump:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: crio daemon status:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: crio daemon config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: /etc/crio:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

>>> host: crio config:
* Profile "kubenet-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-200102"

----------------------- debugLogs end: kubenet-200102 [took: 3.365729258s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-200102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-200102
--- SKIP: TestNetworkPlugins/group/kubenet (3.53s)

TestNetworkPlugins/group/cilium (5.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-200102 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-200102

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-200102

>>> host: /etc/nsswitch.conf:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/hosts:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/resolv.conf:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-200102

>>> host: crictl pods:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: crictl containers:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> k8s: describe netcat deployment:
error: context "cilium-200102" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-200102" does not exist

>>> k8s: netcat logs:
error: context "cilium-200102" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-200102" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-200102" does not exist

>>> k8s: coredns logs:
error: context "cilium-200102" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-200102" does not exist

>>> k8s: api server logs:
error: context "cilium-200102" does not exist

>>> host: /etc/cni:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: ip a s:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: ip r s:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: iptables-save:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: iptables table nat:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-200102

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-200102

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-200102" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-200102" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-200102

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-200102

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-200102" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-200102" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-200102" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-200102" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-200102" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: kubelet daemon config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> k8s: kubelet logs:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-894101
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-226873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 21:58:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-050146
contexts:
- context:
    cluster: cert-expiration-894101
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 22:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-894101
  name: cert-expiration-894101
- context:
    cluster: kubernetes-upgrade-050146
    user: kubernetes-upgrade-050146
  name: kubernetes-upgrade-050146
current-context: ""
kind: Config
users:
- name: cert-expiration-894101
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/cert-expiration-894101/client.key
- name: kubernetes-upgrade-050146
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.crt
    client-key: /home/jenkins/minikube-integration/21724-226873/.minikube/profiles/kubernetes-upgrade-050146/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-200102

>>> host: docker daemon status:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: docker daemon config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: docker system info:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: cri-docker daemon status:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: cri-docker daemon config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: cri-dockerd version:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: containerd daemon status:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: containerd daemon config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: containerd config dump:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: crio daemon status:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: crio daemon config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: /etc/crio:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

>>> host: crio config:
* Profile "cilium-200102" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-200102"

----------------------- debugLogs end: cilium-200102 [took: 5.323797062s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-200102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-200102
--- SKIP: TestNetworkPlugins/group/cilium (5.49s)